Test Report: KVM_Linux 17297

d70abdd8c088cadcf8720531a75f8262065eb1b0:2023-09-25:31157

Tests failed (3/315)

Order  Failed test                                                          Duration (s)
345    TestStartStop/group/old-k8s-version/serial/SecondStart                     933.21
381    TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop          542.38
382    TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop            133.22
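To reproduce a single failure locally, the failing subtest can be selected with a go test -run filter against minikube's integration suite. The invocation below is a sketch based on minikube's contributor documentation; the -minikube-start-args flag and the make integration target are assumptions that may differ between releases, and the timeout value is arbitrary.

	# Sketch: re-run only the failing subtest with the kvm2 driver.
	# Assumes out/minikube-linux-amd64 has already been built (make).
	# Flag names follow the contributor docs and may vary by version.
	env TEST_ARGS="-minikube-start-args=--driver=kvm2 -test.run TestStartStop/group/old-k8s-version/serial/SecondStart" make integration

	# Roughly equivalent direct go test form:
	go test ./test/integration -v -timeout 90m \
	  -run 'TestStartStop/group/old-k8s-version/serial/SecondStart' \
	  -args -minikube-start-args='--driver=kvm2'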
TestStartStop/group/old-k8s-version/serial/SecondStart (933.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-694015 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-694015 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: exit status 80 (15m31.251483455s)

-- stdout --
	* [old-k8s-version-694015] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the kvm2 driver based on existing profile
	* Starting control plane node old-k8s-version-694015 in cluster old-k8s-version-694015
	* Restarting existing kvm2 VM for "old-k8s-version-694015" ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-694015 addons enable metrics-server	
	
	
	* Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	
	

-- /stdout --
** stderr ** 
	I0925 11:24:40.587662   57426 out.go:296] Setting OutFile to fd 1 ...
	I0925 11:24:40.587801   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:24:40.587813   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:24:40.587820   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:24:40.588100   57426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
	I0925 11:24:40.588816   57426 out.go:303] Setting JSON to false
	I0925 11:24:40.590066   57426 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4032,"bootTime":1695637049,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 11:24:40.590144   57426 start.go:138] virtualization: kvm guest
	I0925 11:24:40.592274   57426 out.go:177] * [old-k8s-version-694015] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 11:24:40.594623   57426 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 11:24:40.596436   57426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:24:40.594591   57426 notify.go:220] Checking for updates...
	I0925 11:24:40.598264   57426 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:24:40.599930   57426 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	I0925 11:24:40.601598   57426 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 11:24:40.603255   57426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 11:24:40.605387   57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 11:24:40.606018   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:24:40.606071   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:24:40.626954   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38367
	I0925 11:24:40.628060   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:24:40.628684   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:24:40.628740   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:24:40.629148   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:24:40.629378   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:24:40.631543   57426 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0925 11:24:40.633238   57426 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 11:24:40.633674   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:24:40.633745   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:24:40.649026   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0925 11:24:40.649692   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:24:40.650276   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:24:40.650328   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:24:40.650641   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:24:40.650833   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:24:40.690486   57426 out.go:177] * Using the kvm2 driver based on existing profile
	I0925 11:24:40.691928   57426 start.go:298] selected driver: kvm2
	I0925 11:24:40.691940   57426 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-694015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694015 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 11:24:40.692057   57426 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 11:24:40.692693   57426 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:24:40.692779   57426 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17297-6032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0925 11:24:40.707177   57426 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0925 11:24:40.707636   57426 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 11:24:40.707677   57426 cni.go:84] Creating CNI manager for ""
	I0925 11:24:40.707702   57426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 11:24:40.707715   57426 start_flags.go:321] config:
	{Name:old-k8s-version-694015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694015 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 11:24:40.707942   57426 iso.go:125] acquiring lock: {Name:mkb9e2f6e1d5a2b50ee182236ae1b19ef3677829 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:24:40.710861   57426 out.go:177] * Starting control plane node old-k8s-version-694015 in cluster old-k8s-version-694015
	I0925 11:24:40.712423   57426 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 11:24:40.712460   57426 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0925 11:24:40.712472   57426 cache.go:57] Caching tarball of preloaded images
	I0925 11:24:40.712562   57426 preload.go:174] Found /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0925 11:24:40.712577   57426 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0925 11:24:40.712708   57426 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/config.json ...
	I0925 11:24:40.712889   57426 start.go:365] acquiring machines lock for old-k8s-version-694015: {Name:mk02fb3d97d6ed60b07ca18d96424c593d1bb8d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 11:24:40.712934   57426 start.go:369] acquired machines lock for "old-k8s-version-694015" in 24.9µs
	I0925 11:24:40.712951   57426 start.go:96] Skipping create...Using existing machine configuration
	I0925 11:24:40.712964   57426 fix.go:54] fixHost starting: 
	I0925 11:24:40.713244   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:24:40.713271   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:24:40.727190   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34189
	I0925 11:24:40.727613   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:24:40.728064   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:24:40.728087   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:24:40.728504   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:24:40.728754   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:24:40.728912   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:24:40.730893   57426 fix.go:102] recreateIfNeeded on old-k8s-version-694015: state=Stopped err=<nil>
	I0925 11:24:40.730919   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	W0925 11:24:40.731114   57426 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 11:24:40.733151   57426 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-694015" ...
	I0925 11:24:40.734539   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Start
	I0925 11:24:40.734798   57426 main.go:141] libmachine: (old-k8s-version-694015) Ensuring networks are active...
	I0925 11:24:40.736933   57426 main.go:141] libmachine: (old-k8s-version-694015) Ensuring network default is active
	I0925 11:24:40.737407   57426 main.go:141] libmachine: (old-k8s-version-694015) Ensuring network mk-old-k8s-version-694015 is active
	I0925 11:24:40.737983   57426 main.go:141] libmachine: (old-k8s-version-694015) Getting domain xml...
	I0925 11:24:40.738815   57426 main.go:141] libmachine: (old-k8s-version-694015) Creating domain...
	I0925 11:24:42.307156   57426 main.go:141] libmachine: (old-k8s-version-694015) Waiting to get IP...
	I0925 11:24:42.308255   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:42.308900   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:42.309007   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:42.308888   57460 retry.go:31] will retry after 222.729566ms: waiting for machine to come up
	I0925 11:24:42.533808   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:42.534385   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:42.534423   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:42.534337   57460 retry.go:31] will retry after 362.103622ms: waiting for machine to come up
	I0925 11:24:42.898185   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:42.898750   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:42.898780   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:42.898698   57460 retry.go:31] will retry after 476.874033ms: waiting for machine to come up
	I0925 11:24:43.377385   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:43.377864   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:43.377888   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:43.377815   57460 retry.go:31] will retry after 439.843301ms: waiting for machine to come up
	I0925 11:24:43.819586   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:43.820106   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:43.820129   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:43.820067   57460 retry.go:31] will retry after 639.618656ms: waiting for machine to come up
	I0925 11:24:44.461710   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:44.462257   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:44.462285   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:44.462194   57460 retry.go:31] will retry after 764.340612ms: waiting for machine to come up
	I0925 11:24:45.228293   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:45.228867   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:45.228892   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:45.228810   57460 retry.go:31] will retry after 795.396761ms: waiting for machine to come up
	I0925 11:24:46.025469   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:46.025910   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:46.025952   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:46.025891   57460 retry.go:31] will retry after 1.29674171s: waiting for machine to come up
	I0925 11:24:47.324945   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:47.325583   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:47.325615   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:47.325529   57460 retry.go:31] will retry after 1.518748069s: waiting for machine to come up
	I0925 11:24:48.845862   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:48.846458   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:48.846518   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:48.846423   57460 retry.go:31] will retry after 1.604353924s: waiting for machine to come up
	I0925 11:24:50.452522   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:50.453382   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:50.453412   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:50.453324   57460 retry.go:31] will retry after 2.86199606s: waiting for machine to come up
	I0925 11:24:53.317639   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:53.318141   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:53.318177   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:53.318064   57460 retry.go:31] will retry after 3.10153544s: waiting for machine to come up
	I0925 11:24:56.420998   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:56.421569   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
	I0925 11:24:56.421598   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:56.421546   57460 retry.go:31] will retry after 2.981021856s: waiting for machine to come up
	I0925 11:24:59.405685   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.406220   57426 main.go:141] libmachine: (old-k8s-version-694015) Found IP for machine: 192.168.50.17
	I0925 11:24:59.406248   57426 main.go:141] libmachine: (old-k8s-version-694015) Reserving static IP address...
	I0925 11:24:59.406265   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has current primary IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.406768   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "old-k8s-version-694015", mac: "52:54:00:e6:28:7c", ip: "192.168.50.17"} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:24:59.406802   57426 main.go:141] libmachine: (old-k8s-version-694015) Reserved static IP address: 192.168.50.17
	I0925 11:24:59.406820   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | skip adding static IP to network mk-old-k8s-version-694015 - found existing host DHCP lease matching {name: "old-k8s-version-694015", mac: "52:54:00:e6:28:7c", ip: "192.168.50.17"}
	I0925 11:24:59.406839   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Getting to WaitForSSH function...
	I0925 11:24:59.406867   57426 main.go:141] libmachine: (old-k8s-version-694015) Waiting for SSH to be available...
	I0925 11:24:59.408976   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.409297   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:24:59.409327   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.409411   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Using SSH client type: external
	I0925 11:24:59.409462   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Using SSH private key: /home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa (-rw-------)
	I0925 11:24:59.409503   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0925 11:24:59.409523   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | About to run SSH command:
	I0925 11:24:59.409539   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | exit 0
	I0925 11:24:59.548605   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | SSH cmd err, output: <nil>: 
	I0925 11:24:59.549006   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetConfigRaw
	I0925 11:24:59.549595   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetIP
	I0925 11:24:59.552192   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.552618   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:24:59.552647   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.552987   57426 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/config.json ...
	I0925 11:24:59.553160   57426 machine.go:88] provisioning docker machine ...
	I0925 11:24:59.553175   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:24:59.553385   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetMachineName
	I0925 11:24:59.553549   57426 buildroot.go:166] provisioning hostname "old-k8s-version-694015"
	I0925 11:24:59.553575   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetMachineName
	I0925 11:24:59.553713   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:24:59.556121   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.556490   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:24:59.556520   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.556726   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:24:59.556879   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:24:59.557011   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:24:59.557173   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:24:59.557338   57426 main.go:141] libmachine: Using SSH client type: native
	I0925 11:24:59.557680   57426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0925 11:24:59.557698   57426 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-694015 && echo "old-k8s-version-694015" | sudo tee /etc/hostname
	I0925 11:24:59.703561   57426 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-694015
	
	I0925 11:24:59.703603   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:24:59.706307   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.706671   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:24:59.706711   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.706822   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:24:59.707048   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:24:59.707221   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:24:59.707379   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:24:59.707553   57426 main.go:141] libmachine: Using SSH client type: native
	I0925 11:24:59.708033   57426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0925 11:24:59.708065   57426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-694015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-694015/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-694015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 11:24:59.841494   57426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 11:24:59.841538   57426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17297-6032/.minikube CaCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17297-6032/.minikube}
	I0925 11:24:59.841568   57426 buildroot.go:174] setting up certificates
	I0925 11:24:59.841579   57426 provision.go:83] configureAuth start
	I0925 11:24:59.841592   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetMachineName
	I0925 11:24:59.841896   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetIP
	I0925 11:24:59.844771   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.845085   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:24:59.845118   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.845393   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:24:59.847727   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.848180   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:24:59.848233   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:24:59.848332   57426 provision.go:138] copyHostCerts
	I0925 11:24:59.848387   57426 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem, removing ...
	I0925 11:24:59.848397   57426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem
	I0925 11:24:59.848463   57426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem (1078 bytes)
	I0925 11:24:59.848546   57426 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem, removing ...
	I0925 11:24:59.848556   57426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem
	I0925 11:24:59.848580   57426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem (1123 bytes)
	I0925 11:24:59.848627   57426 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem, removing ...
	I0925 11:24:59.848634   57426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem
	I0925 11:24:59.848656   57426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem (1679 bytes)
	I0925 11:24:59.848728   57426 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-694015 san=[192.168.50.17 192.168.50.17 localhost 127.0.0.1 minikube old-k8s-version-694015]
	I0925 11:25:00.081298   57426 provision.go:172] copyRemoteCerts
	I0925 11:25:00.081368   57426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 11:25:00.081389   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:25:00.084399   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:00.084826   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:25:00.084858   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:00.084992   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:25:00.085180   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:25:00.085351   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:25:00.085503   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:25:00.183002   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 11:25:00.209364   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0925 11:25:00.233825   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 11:25:00.259218   57426 provision.go:86] duration metric: configureAuth took 417.624647ms
	I0925 11:25:00.259249   57426 buildroot.go:189] setting minikube options for container-runtime
	I0925 11:25:00.259461   57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 11:25:00.259489   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:25:00.259745   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:25:00.261859   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:00.262253   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:25:00.262282   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:00.262406   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:25:00.262594   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:25:00.262757   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:25:00.262928   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:25:00.263085   57426 main.go:141] libmachine: Using SSH client type: native
	I0925 11:25:00.263525   57426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0925 11:25:00.263543   57426 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 11:25:00.390987   57426 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 11:25:00.391008   57426 buildroot.go:70] root file system type: tmpfs
	I0925 11:25:00.391096   57426 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 11:25:00.391127   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:25:00.394147   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:00.394541   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:25:00.394577   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:00.394694   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:25:00.394876   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:25:00.395024   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:25:00.395180   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:25:00.395365   57426 main.go:141] libmachine: Using SSH client type: native
	I0925 11:25:00.395679   57426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0925 11:25:00.395748   57426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 11:25:00.538360   57426 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 11:25:00.538398   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:25:00.541330   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:00.541684   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:25:00.541732   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:00.541988   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:25:00.542195   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:25:00.542376   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:25:00.542524   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:25:00.542734   57426 main.go:141] libmachine: Using SSH client type: native
	I0925 11:25:00.543262   57426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0925 11:25:00.543290   57426 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 11:25:01.431723   57426 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 11:25:01.431753   57426 machine.go:91] provisioned docker machine in 1.878579847s
	I0925 11:25:01.431766   57426 start.go:300] post-start starting for "old-k8s-version-694015" (driver="kvm2")
	I0925 11:25:01.431779   57426 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 11:25:01.431799   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:25:01.432193   57426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 11:25:01.432230   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:25:01.435233   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:01.435611   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:25:01.435643   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:01.435778   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:25:01.435966   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:25:01.436127   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:25:01.436275   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:25:01.540619   57426 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 11:25:01.545212   57426 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 11:25:01.545237   57426 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-6032/.minikube/addons for local assets ...
	I0925 11:25:01.545315   57426 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-6032/.minikube/files for local assets ...
	I0925 11:25:01.545418   57426 filesync.go:149] local asset: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem -> 132132.pem in /etc/ssl/certs
	I0925 11:25:01.545526   57426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 11:25:01.554611   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem --> /etc/ssl/certs/132132.pem (1708 bytes)
	I0925 11:25:01.580258   57426 start.go:303] post-start completed in 148.474128ms
	I0925 11:25:01.580284   57426 fix.go:56] fixHost completed within 20.867322519s
	I0925 11:25:01.580307   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:25:01.583254   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:01.583724   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:25:01.583768   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:01.583940   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:25:01.584118   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:25:01.584263   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:25:01.584378   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:25:01.584595   57426 main.go:141] libmachine: Using SSH client type: native
	I0925 11:25:01.584952   57426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0925 11:25:01.584966   57426 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 11:25:01.713860   57426 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695641101.690775078
	
	I0925 11:25:01.713885   57426 fix.go:206] guest clock: 1695641101.690775078
	I0925 11:25:01.713895   57426 fix.go:219] Guest: 2023-09-25 11:25:01.690775078 +0000 UTC Remote: 2023-09-25 11:25:01.58028895 +0000 UTC m=+21.033561482 (delta=110.486128ms)
	I0925 11:25:01.713933   57426 fix.go:190] guest clock delta is within tolerance: 110.486128ms
	I0925 11:25:01.713941   57426 start.go:83] releasing machines lock for "old-k8s-version-694015", held for 21.00099493s
	I0925 11:25:01.713974   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:25:01.714233   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetIP
	I0925 11:25:01.717127   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:01.717478   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:25:01.717511   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:01.717663   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:25:01.718160   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:25:01.718312   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:25:01.718388   57426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 11:25:01.718432   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:25:01.718529   57426 ssh_runner.go:195] Run: cat /version.json
	I0925 11:25:01.718553   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:25:01.721364   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:01.721628   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:01.721736   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:25:01.721766   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:01.721931   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:25:01.722037   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:25:01.722099   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:01.722104   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:25:01.722253   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:25:01.722340   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:25:01.722414   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:25:01.722485   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:25:01.722621   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:25:01.722755   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:25:01.847665   57426 ssh_runner.go:195] Run: systemctl --version
	I0925 11:25:01.855260   57426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 11:25:01.862482   57426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 11:25:01.862548   57426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0925 11:25:01.875229   57426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0925 11:25:01.897491   57426 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 11:25:01.897526   57426 start.go:469] detecting cgroup driver to use...
	I0925 11:25:01.897667   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 11:25:01.918886   57426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0925 11:25:01.929912   57426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 11:25:01.941679   57426 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 11:25:01.941732   57426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 11:25:01.955647   57426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 11:25:01.969463   57426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 11:25:01.983215   57426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 11:25:01.996913   57426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 11:25:02.010860   57426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 11:25:02.023730   57426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 11:25:02.035214   57426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 11:25:02.047150   57426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:25:02.199973   57426 ssh_runner.go:195] Run: sudo systemctl restart containerd
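
The sed/systemctl sequence above forces containerd onto the cgroupfs driver and then restarts it. A hedged Go sketch of the same pattern, run locally with sudo rather than through ssh_runner (illustrative only, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	// Force containerd to use cgroupfs, matching the kubelet's cgroupDriver.
	if err := run("sudo", "sed", "-i", "-r",
		`s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`,
		"/etc/containerd/config.toml"); err != nil {
		panic(err)
	}
	if err := run("sudo", "systemctl", "daemon-reload"); err != nil {
		panic(err)
	}
	if err := run("sudo", "systemctl", "restart", "containerd"); err != nil {
		panic(err)
	}
}
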
	I0925 11:25:02.224251   57426 start.go:469] detecting cgroup driver to use...
	I0925 11:25:02.224336   57426 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 11:25:02.245450   57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 11:25:02.260076   57426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 11:25:02.284448   57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 11:25:02.302774   57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 11:25:02.322905   57426 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 11:25:02.361137   57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 11:25:02.377691   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 11:25:02.398134   57426 ssh_runner.go:195] Run: which cri-dockerd
	I0925 11:25:02.402981   57426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 11:25:02.414547   57426 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 11:25:02.432822   57426 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 11:25:02.563375   57426 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 11:25:02.706840   57426 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 11:25:02.706978   57426 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 11:25:02.728994   57426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:25:02.849318   57426 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 11:25:04.344306   57426 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.494952682s)
	I0925 11:25:04.344377   57426 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 11:25:04.378626   57426 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 11:25:04.413309   57426 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
	I0925 11:25:04.413355   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetIP
	I0925 11:25:04.415927   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:04.416288   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:25:04.416329   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:25:04.416513   57426 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0925 11:25:04.421006   57426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 11:25:04.436069   57426 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 11:25:04.436130   57426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:25:04.457302   57426 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	registry.k8s.io/pause:3.1
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0925 11:25:04.457326   57426 docker.go:670] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0925 11:25:04.457370   57426 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 11:25:04.466202   57426 ssh_runner.go:195] Run: which lz4
	I0925 11:25:04.469996   57426 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0925 11:25:04.474022   57426 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 11:25:04.474044   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0925 11:25:06.107255   57426 docker.go:628] Took 1.637292 seconds to copy over tarball
	I0925 11:25:06.107326   57426 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 11:25:08.816016   57426 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.708661547s)
	I0925 11:25:08.816052   57426 ssh_runner.go:146] rm: /preloaded.tar.lz4
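
Because the preload tarball was missing on the VM, it was copied over and unpacked with tar's lz4 filter, then deleted. A small Go sketch of that extract-and-clean-up step, assuming the tarball already sits at /preloaded.tar.lz4 and an lz4 binary is installed:

package main

import (
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		// In the real flow the file is scp'd over first; here we just bail.
		panic(err)
	}
	// -I lz4 tells tar to filter the archive through the lz4 binary.
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	_ = os.Remove(tarball) // best-effort cleanup, as in the log
}
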
	I0925 11:25:08.850512   57426 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 11:25:08.859144   57426 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3100 bytes)
	I0925 11:25:08.875250   57426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:25:08.979616   57426 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 11:25:10.698985   57426 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.719331571s)
	I0925 11:25:10.699077   57426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:25:10.721016   57426 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:3.1
	
	-- /stdout --
	I0925 11:25:10.721043   57426 docker.go:670] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0925 11:25:10.721053   57426 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0925 11:25:10.722442   57426 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0925 11:25:10.722491   57426 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0925 11:25:10.722454   57426 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0925 11:25:10.722454   57426 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 11:25:10.722455   57426 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0925 11:25:10.722460   57426 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0925 11:25:10.722480   57426 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0925 11:25:10.722482   57426 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0925 11:25:10.723053   57426 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0925 11:25:10.723206   57426 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 11:25:10.723233   57426 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0925 11:25:10.723284   57426 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0925 11:25:10.723291   57426 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0925 11:25:10.723294   57426 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0925 11:25:10.723284   57426 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0925 11:25:10.723727   57426 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0925 11:25:10.885160   57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0925 11:25:10.886038   57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0925 11:25:10.886075   57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0925 11:25:10.901732   57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0925 11:25:10.910884   57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0925 11:25:10.922280   57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0925 11:25:10.922280   57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0925 11:25:10.935346   57426 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0925 11:25:10.935395   57426 docker.go:317] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0925 11:25:10.935441   57426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0925 11:25:10.948420   57426 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0925 11:25:10.948528   57426 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0925 11:25:10.948434   57426 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0925 11:25:10.948624   57426 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0925 11:25:10.948693   57426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0925 11:25:10.948579   57426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0925 11:25:10.988590   57426 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0925 11:25:10.988640   57426 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0925 11:25:10.988694   57426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0925 11:25:10.991956   57426 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0925 11:25:10.992011   57426 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0925 11:25:10.992039   57426 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0925 11:25:10.992050   57426 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.2
	I0925 11:25:10.992087   57426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0925 11:25:10.992119   57426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0925 11:25:10.992120   57426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0925 11:25:11.015899   57426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0925 11:25:11.022253   57426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0925 11:25:11.035117   57426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0925 11:25:11.045414   57426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0925 11:25:11.045501   57426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0925 11:25:11.348790   57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 11:25:11.374133   57426 cache_images.go:92] LoadImages completed in 653.062439ms
	W0925 11:25:11.374241   57426 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
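
The "needs transfer" decisions above come from comparing the image ID Docker reports against the expected digest and removing mismatches so a cached copy can be loaded in their place. A sketch of that check for a single image, using the etcd digest printed in the log (the load-from-cache step is omitted):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func imageID(ref string) (string, error) {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", ref).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const ref = "registry.k8s.io/etcd:3.3.15-0"
	// Expected hash, copied from the cache_images.go line in the log.
	const want = "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed"
	got, err := imageID(ref)
	if err != nil || !strings.Contains(got, want) {
		fmt.Printf("%q needs transfer; removing any stale copy\n", ref)
		_ = exec.Command("docker", "rmi", ref).Run() // then load from the cache dir
	}
}
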
	I0925 11:25:11.374312   57426 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 11:25:11.405963   57426 cni.go:84] Creating CNI manager for ""
	I0925 11:25:11.405993   57426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 11:25:11.406013   57426 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 11:25:11.406037   57426 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.17 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-694015 NodeName:old-k8s-version-694015 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0925 11:25:11.406231   57426 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-694015"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-694015
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.17:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0925 11:25:11.406343   57426 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-694015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694015 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 11:25:11.406419   57426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0925 11:25:11.416154   57426 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 11:25:11.416229   57426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 11:25:11.426088   57426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
	I0925 11:25:11.443617   57426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 11:25:11.461066   57426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I0925 11:25:11.477277   57426 ssh_runner.go:195] Run: grep 192.168.50.17	control-plane.minikube.internal$ /etc/hosts
	I0925 11:25:11.481098   57426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
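
The one-liner above rewrites /etc/hosts idempotently: strip any stale control-plane.minikube.internal entry, append the current mapping, and copy the result back via sudo. Roughly the same logic in Go (illustrative; the shell version in the log is what actually ran):

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	const entry = "192.168.50.17\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this name, like the grep -v above.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	tmp, err := os.CreateTemp("", "hosts")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.WriteString(strings.Join(kept, "\n") + "\n"); err != nil {
		panic(err)
	}
	tmp.Close()
	// sudo cp (rather than rename) keeps the original ownership intact.
	if err := exec.Command("sudo", "cp", tmp.Name(), "/etc/hosts").Run(); err != nil {
		panic(err)
	}
}
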
	I0925 11:25:11.492472   57426 certs.go:56] Setting up /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015 for IP: 192.168.50.17
	I0925 11:25:11.492519   57426 certs.go:190] acquiring lock for shared ca certs: {Name:mkb77fd8e605e52ea68ab5351af7de9da389c0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:25:11.492715   57426 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17297-6032/.minikube/ca.key
	I0925 11:25:11.492775   57426 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.key
	I0925 11:25:11.492891   57426 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/client.key
	I0925 11:25:11.492969   57426 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/apiserver.key.6142b612
	I0925 11:25:11.493032   57426 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/proxy-client.key
	I0925 11:25:11.493176   57426 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213.pem (1338 bytes)
	W0925 11:25:11.493218   57426 certs.go:433] ignoring /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213_empty.pem, impossibly tiny 0 bytes
	I0925 11:25:11.493234   57426 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 11:25:11.493273   57426 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem (1078 bytes)
	I0925 11:25:11.493311   57426 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem (1123 bytes)
	I0925 11:25:11.493347   57426 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem (1679 bytes)
	I0925 11:25:11.493409   57426 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem (1708 bytes)
	I0925 11:25:11.494801   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 11:25:11.522161   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 11:25:11.549159   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 11:25:11.575972   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 11:25:11.597528   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 11:25:11.619284   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0925 11:25:11.642480   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 11:25:11.665449   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 11:25:11.687812   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem --> /usr/share/ca-certificates/132132.pem (1708 bytes)
	I0925 11:25:11.711371   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 11:25:11.735934   57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213.pem --> /usr/share/ca-certificates/13213.pem (1338 bytes)
	I0925 11:25:11.757797   57426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 11:25:11.773891   57426 ssh_runner.go:195] Run: openssl version
	I0925 11:25:11.779561   57426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132132.pem && ln -fs /usr/share/ca-certificates/132132.pem /etc/ssl/certs/132132.pem"
	I0925 11:25:11.790731   57426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132132.pem
	I0925 11:25:11.796032   57426 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 10:38 /usr/share/ca-certificates/132132.pem
	I0925 11:25:11.796080   57426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132132.pem
	I0925 11:25:11.801704   57426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132132.pem /etc/ssl/certs/3ec20f2e.0"
	I0925 11:25:11.813138   57426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 11:25:11.823852   57426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:25:11.828441   57426 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:25:11.828493   57426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:25:11.834206   57426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 11:25:11.845200   57426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13213.pem && ln -fs /usr/share/ca-certificates/13213.pem /etc/ssl/certs/13213.pem"
	I0925 11:25:11.858934   57426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13213.pem
	I0925 11:25:11.864927   57426 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 10:38 /usr/share/ca-certificates/13213.pem
	I0925 11:25:11.864974   57426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13213.pem
	I0925 11:25:11.871976   57426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13213.pem /etc/ssl/certs/51391683.0"
	I0925 11:25:11.885846   57426 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 11:25:11.890495   57426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0925 11:25:11.896654   57426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0925 11:25:11.902657   57426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0925 11:25:11.908626   57426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0925 11:25:11.914386   57426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0925 11:25:11.920901   57426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
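
Each "openssl x509 ... -checkend 86400" call above asks whether a certificate expires within the next 24 hours. The equivalent check with Go's crypto/x509, for one of the paths in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
	} else {
		fmt.Println("certificate is valid for at least another day")
	}
}
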
	I0925 11:25:11.927115   57426 kubeadm.go:404] StartCluster: {Name:old-k8s-version-694015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694015 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 11:25:11.927268   57426 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 11:25:11.949369   57426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 11:25:11.961069   57426 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0925 11:25:11.961093   57426 kubeadm.go:636] restartCluster start
	I0925 11:25:11.961142   57426 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0925 11:25:11.971923   57426 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:11.972450   57426 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-694015" does not appear in /home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:25:11.972749   57426 kubeconfig.go:146] "old-k8s-version-694015" context is missing from /home/jenkins/minikube-integration/17297-6032/kubeconfig - will repair!
	I0925 11:25:11.973200   57426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:25:11.974796   57426 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0925 11:25:11.983812   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:11.983855   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:11.994861   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:11.994887   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:11.994937   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:12.005652   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:12.506376   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:12.506455   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:12.520081   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:13.006631   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:13.006695   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:13.019568   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:13.505914   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:13.506006   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:13.518385   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:14.005809   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:14.005874   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:14.019345   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:14.505870   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:14.505971   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:14.519278   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:15.005761   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:15.005847   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:15.019304   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:15.505775   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:15.505861   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:15.522069   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:16.006204   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:16.006301   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:16.019867   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:16.506529   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:16.506617   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:16.518437   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:17.006003   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:17.006072   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:17.017665   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:17.506193   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:17.506270   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:17.518866   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:18.006479   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:18.006549   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:18.018134   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:18.506718   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:18.506779   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:18.518368   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:19.005863   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:19.005914   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:19.019889   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:19.506525   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:19.506610   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:19.518123   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:20.006750   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:20.006834   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:20.018691   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:20.505853   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:20.505944   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:20.518163   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:21.005743   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:21.005799   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:21.018421   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:21.505927   57426 api_server.go:166] Checking apiserver status ...
	I0925 11:25:21.505992   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:25:21.518395   57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:25:21.984233   57426 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0925 11:25:21.984268   57426 kubeadm.go:1128] stopping kube-system containers ...
	I0925 11:25:21.984338   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 11:25:22.006278   57426 docker.go:463] Stopping containers: [6fc1a53ec6fe fd5a5b49ebb6 ae4bcf7dc2cb da81a748f8c6 18341e03937a c198cace2d43 2ea2541ac22c 4fbe3df9792c 8cd0717575c9 eedc3bc3189c c5ece3832a65 1b6622ab649f 8a8af2658d58 7aba7a4dd998]
	I0925 11:25:22.006354   57426 ssh_runner.go:195] Run: docker stop 6fc1a53ec6fe fd5a5b49ebb6 ae4bcf7dc2cb da81a748f8c6 18341e03937a c198cace2d43 2ea2541ac22c 4fbe3df9792c 8cd0717575c9 eedc3bc3189c c5ece3832a65 1b6622ab649f 8a8af2658d58 7aba7a4dd998
	I0925 11:25:22.030284   57426 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0925 11:25:22.048892   57426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 11:25:22.058675   57426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 11:25:22.058725   57426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 11:25:22.069869   57426 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0925 11:25:22.069887   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:25:22.203346   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:25:23.343648   57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.140263014s)
	I0925 11:25:23.343682   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:25:23.609027   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:25:23.759944   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:25:23.877711   57426 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:25:23.877795   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:25:23.894065   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:25:24.409145   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:25:24.909264   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:25:25.409155   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:25:25.908595   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:25:25.941174   57426 api_server.go:72] duration metric: took 2.063462682s to wait for apiserver process to appear ...
	I0925 11:25:25.941202   57426 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:25:25.941221   57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0925 11:25:30.814959   57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0925 11:25:30.814986   57426 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0925 11:25:30.814998   57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0925 11:25:30.848727   57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0925 11:25:30.848763   57426 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0925 11:25:31.349509   57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0925 11:25:31.387359   57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0925 11:25:31.387410   57426 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0925 11:25:31.848937   57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0925 11:25:31.867183   57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0925 11:25:31.867218   57426 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0925 11:25:32.349854   57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0925 11:25:32.360469   57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
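
The healthz probes above progress from 403 (RBAC bootstrap roles not yet created) through 500 (poststart hooks still failing) to 200. A minimal Go polling loop in the same spirit, assuming the self-signed apiserver certificate from this cluster, hence InsecureSkipVerify (a production client would trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	const url = "https://192.168.50.17:8443/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return
			}
			// 403 and 500 both mean "not ready yet"; the body explains why.
			fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
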
	I0925 11:25:32.369167   57426 api_server.go:141] control plane version: v1.16.0
	I0925 11:25:32.369203   57426 api_server.go:131] duration metric: took 6.427991735s to wait for apiserver health ...
	I0925 11:25:32.369217   57426 cni.go:84] Creating CNI manager for ""
	I0925 11:25:32.369231   57426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 11:25:32.369242   57426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:25:32.380134   57426 system_pods.go:59] 7 kube-system pods found
	I0925 11:25:32.380171   57426 system_pods.go:61] "coredns-5644d7b6d9-5c2wq" [9b690088-7bfd-4691-b173-f4334779d35a] Running
	I0925 11:25:32.380184   57426 system_pods.go:61] "etcd-old-k8s-version-694015" [36dee6e4-aeee-4551-9d8b-1ca1bea32994] Running
	I0925 11:25:32.380196   57426 system_pods.go:61] "kube-apiserver-old-k8s-version-694015" [90dc280a-6164-49e3-85e7-1c65362aedc4] Running
	I0925 11:25:32.380209   57426 system_pods.go:61] "kube-controller-manager-old-k8s-version-694015" [d9517a82-2ba1-4805-b8da-9e5b2ac42e3f] Running
	I0925 11:25:32.380217   57426 system_pods.go:61] "kube-proxy-tz4wl" [878e4f41-5b17-43b3-8f64-43a5f3f1b33f] Running
	I0925 11:25:32.380225   57426 system_pods.go:61] "kube-scheduler-old-k8s-version-694015" [b9b2adb4-7746-42df-a854-f4c222d53d98] Running
	I0925 11:25:32.380236   57426 system_pods.go:61] "storage-provisioner" [ecfa3d77-460f-4a09-b035-18707c06fed3] Running
	I0925 11:25:32.380250   57426 system_pods.go:74] duration metric: took 10.9971ms to wait for pod list to return data ...
	I0925 11:25:32.380264   57426 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:25:32.394660   57426 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:25:32.394700   57426 node_conditions.go:123] node cpu capacity is 2
	I0925 11:25:32.394715   57426 node_conditions.go:105] duration metric: took 14.439734ms to run NodePressure ...
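
The NodePressure verification above reads the node's reported capacity (2 CPUs, 17784752Ki of ephemeral storage). A sketch of the same check with client-go, assuming the kubeconfig path from this run; everything else is illustrative, not minikube's node_conditions.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17297-6032/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList keyed by resource name.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
	}
}
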
	I0925 11:25:32.394736   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:25:32.961075   57426 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0925 11:25:32.965086   57426 retry.go:31] will retry after 188.562442ms: kubelet not initialised
	I0925 11:25:33.160477   57426 retry.go:31] will retry after 370.071584ms: kubelet not initialised
	I0925 11:25:33.536011   57426 retry.go:31] will retry after 824.663389ms: kubelet not initialised
	I0925 11:25:34.365405   57426 retry.go:31] will retry after 810.880807ms: kubelet not initialised
	I0925 11:25:35.185131   57426 retry.go:31] will retry after 1.721240677s: kubelet not initialised
	I0925 11:25:36.911363   57426 retry.go:31] will retry after 2.193241834s: kubelet not initialised
	I0925 11:25:39.112946   57426 retry.go:31] will retry after 1.951980278s: kubelet not initialised
	I0925 11:25:41.071011   57426 retry.go:31] will retry after 6.193937978s: kubelet not initialised
	I0925 11:25:47.274201   57426 retry.go:31] will retry after 4.606339091s: kubelet not initialised
	I0925 11:25:51.885465   57426 retry.go:31] will retry after 8.801943251s: kubelet not initialised
	I0925 11:26:00.693610   57426 retry.go:31] will retry after 12.468242279s: kubelet not initialised
	I0925 11:26:13.171303   57426 kubeadm.go:787] kubelet initialised
	I0925 11:26:13.171330   57426 kubeadm.go:788] duration metric: took 40.21022654s waiting for restarted kubelet to initialise ...
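
The "will retry after ..." sequence above is a retry loop whose delay roughly doubles on each attempt with random jitter (188ms, 370ms, 824ms, ... 12.4s). A minimal sketch of that pattern, using a hypothetical `retryWithBackoff` helper rather than minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op with a jittered, roughly doubling delay
// until it succeeds or maxWait elapses.
func retryWithBackoff(op func() error, maxWait time.Duration) error {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up: %w", err)
		}
		// Add up to 100% jitter so concurrent waiters don't synchronize.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	start := time.Now()
	err := retryWithBackoff(func() error {
		if time.Since(start) < 3*time.Second {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}
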
	I0925 11:26:13.171339   57426 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:26:13.179728   57426 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2mp5v" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:13.189191   57426 pod_ready.go:92] pod "coredns-5644d7b6d9-2mp5v" in "kube-system" namespace has status "Ready":"True"
	I0925 11:26:13.189214   57426 pod_ready.go:81] duration metric: took 9.450882ms waiting for pod "coredns-5644d7b6d9-2mp5v" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:13.189224   57426 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5c2wq" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:13.196774   57426 pod_ready.go:92] pod "coredns-5644d7b6d9-5c2wq" in "kube-system" namespace has status "Ready":"True"
	I0925 11:26:13.196799   57426 pod_ready.go:81] duration metric: took 7.568804ms waiting for pod "coredns-5644d7b6d9-5c2wq" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:13.196811   57426 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:13.203653   57426 pod_ready.go:92] pod "etcd-old-k8s-version-694015" in "kube-system" namespace has status "Ready":"True"
	I0925 11:26:13.203673   57426 pod_ready.go:81] duration metric: took 6.854302ms waiting for pod "etcd-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:13.203685   57426 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:13.210092   57426 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-694015" in "kube-system" namespace has status "Ready":"True"
	I0925 11:26:13.210112   57426 pod_ready.go:81] duration metric: took 6.417933ms waiting for pod "kube-apiserver-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:13.210123   57426 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:13.566312   57426 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-694015" in "kube-system" namespace has status "Ready":"True"
	I0925 11:26:13.566341   57426 pod_ready.go:81] duration metric: took 356.208747ms waiting for pod "kube-controller-manager-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:13.566354   57426 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tz4wl" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:13.966900   57426 pod_ready.go:92] pod "kube-proxy-tz4wl" in "kube-system" namespace has status "Ready":"True"
	I0925 11:26:13.966931   57426 pod_ready.go:81] duration metric: took 400.568203ms waiting for pod "kube-proxy-tz4wl" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:13.966944   57426 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:14.366660   57426 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-694015" in "kube-system" namespace has status "Ready":"True"
	I0925 11:26:14.366737   57426 pod_ready.go:81] duration metric: took 399.776351ms waiting for pod "kube-scheduler-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
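
Each pod_ready check above amounts to reading the pod's PodReady condition from its status. A sketch with client-go, assuming the kubeconfig path from this run; `isPodReady` is an illustrative helper, not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17297-6032/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(cs, "kube-system", "etcd-old-k8s-version-694015")
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", ready)
}
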
	I0925 11:26:14.366759   57426 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace to be "Ready" ...
	I0925 11:26:16.674664   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:19.173958   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:21.674537   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:23.674786   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:25.674931   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:27.675303   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:29.675699   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:32.174922   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:34.674412   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:36.674708   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:39.174788   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:41.674981   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:44.173921   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:46.673916   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:49.172901   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:51.174245   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:53.174435   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:55.673610   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:26:57.673747   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:00.173135   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:02.673309   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:04.674279   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:06.674799   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:08.674858   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:11.174786   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:13.673493   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:15.674090   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:18.175688   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:20.674888   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:22.679772   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:25.174721   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:27.674564   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:30.174086   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:32.174464   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:34.673511   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:36.674414   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:39.175305   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:41.673238   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:43.675950   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:46.174549   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:48.675418   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:51.174891   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:53.675016   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:56.173958   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:27:58.174407   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:00.174454   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:02.174841   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:04.175287   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:06.674679   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:09.173838   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:11.174091   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:13.174267   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:15.674829   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:18.175095   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:20.674171   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:22.674573   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:25.174611   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:27.673983   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:29.675459   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:32.173159   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:34.672934   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:36.673537   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:38.675023   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:41.172736   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:43.174138   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:45.174205   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:47.176223   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:49.674353   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:52.173594   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:54.173762   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:56.673626   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:58.673704   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:00.674496   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.676016   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:04.677117   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:07.173790   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:09.673547   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:12.173257   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.673817   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:17.173554   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:19.674607   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:22.173742   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:24.674422   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:27.174742   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:29.673522   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:31.674133   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:34.173962   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:36.175249   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:38.674512   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:41.172242   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:43.173423   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:45.174163   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:47.174974   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.673662   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:52.173811   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:54.673161   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:56.674157   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:59.174193   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:01.674624   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:04.179180   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:06.676262   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:09.174330   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:11.175516   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:13.673816   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:14.366919   57426 pod_ready.go:81] duration metric: took 4m0.00014225s waiting for pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace to be "Ready" ...
	E0925 11:30:14.366953   57426 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:30:14.366991   57426 pod_ready.go:38] duration metric: took 4m1.195639658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:30:14.367015   57426 kubeadm.go:640] restartCluster took 5m2.405916758s
	W0925 11:30:14.367083   57426 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0925 11:30:14.367112   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0925 11:30:17.424908   57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (3.057768249s)
	I0925 11:30:17.424975   57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:17.439514   57426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 11:30:17.449686   57426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 11:30:17.460096   57426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
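
The config check above treats `ls` exiting with status 2 (one or more files missing after the reset) as "no stale config to clean up". A local sketch of the same probe with os/exec, in place of minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "ls", "-la",
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// ls exits with status 2 when any listed file is missing, which
		// the log above treats as "fresh cluster, skip stale config cleanup".
		fmt.Printf("config check failed, skipping stale config cleanup: %v\n%s", err, out)
		return
	}
	fmt.Println("existing kubeconfigs found; stale config cleanup would run")
}
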
	I0925 11:30:17.460147   57426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0925 11:30:17.622252   57426 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0925 11:30:17.662261   57426 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I0925 11:30:17.759764   57426 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 11:30:30.749642   57426 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0925 11:30:30.749742   57426 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 11:30:30.749858   57426 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 11:30:30.749944   57426 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 11:30:30.750021   57426 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 11:30:30.750109   57426 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 11:30:30.750191   57426 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 11:30:30.750247   57426 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0925 11:30:30.750371   57426 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 11:30:30.751913   57426 out.go:204]   - Generating certificates and keys ...
	I0925 11:30:30.752003   57426 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 11:30:30.752119   57426 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 11:30:30.752232   57426 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0925 11:30:30.752318   57426 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0925 11:30:30.752414   57426 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0925 11:30:30.752468   57426 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0925 11:30:30.752570   57426 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0925 11:30:30.752681   57426 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0925 11:30:30.752781   57426 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0925 11:30:30.752890   57426 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0925 11:30:30.752940   57426 kubeadm.go:322] [certs] Using the existing "sa" key
	I0925 11:30:30.753020   57426 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 11:30:30.753090   57426 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 11:30:30.753154   57426 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 11:30:30.753251   57426 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 11:30:30.753324   57426 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 11:30:30.753398   57426 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 11:30:30.755107   57426 out.go:204]   - Booting up control plane ...
	I0925 11:30:30.755208   57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 11:30:30.755334   57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 11:30:30.755426   57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 11:30:30.755500   57426 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 11:30:30.755652   57426 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 11:30:30.755754   57426 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.505077 seconds
	I0925 11:30:30.755912   57426 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 11:30:30.756083   57426 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 11:30:30.756182   57426 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 11:30:30.756384   57426 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-694015 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0925 11:30:30.756471   57426 kubeadm.go:322] [bootstrap-token] Using token: snq27o.n0f9uw50v17gbayd
	I0925 11:30:30.758173   57426 out.go:204]   - Configuring RBAC rules ...
	I0925 11:30:30.758310   57426 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 11:30:30.758487   57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 11:30:30.758649   57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 11:30:30.758810   57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 11:30:30.758962   57426 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 11:30:30.759033   57426 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 11:30:30.759112   57426 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 11:30:30.759121   57426 kubeadm.go:322] 
	I0925 11:30:30.759191   57426 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 11:30:30.759205   57426 kubeadm.go:322] 
	I0925 11:30:30.759275   57426 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 11:30:30.759285   57426 kubeadm.go:322] 
	I0925 11:30:30.759329   57426 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 11:30:30.759379   57426 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 11:30:30.759421   57426 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 11:30:30.759429   57426 kubeadm.go:322] 
	I0925 11:30:30.759483   57426 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 11:30:30.759595   57426 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 11:30:30.759689   57426 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 11:30:30.759705   57426 kubeadm.go:322] 
	I0925 11:30:30.759821   57426 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0925 11:30:30.759962   57426 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 11:30:30.759977   57426 kubeadm.go:322] 
	I0925 11:30:30.760084   57426 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token snq27o.n0f9uw50v17gbayd \
	I0925 11:30:30.760216   57426 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 \
	I0925 11:30:30.760255   57426 kubeadm.go:322]     --control-plane 	  
	I0925 11:30:30.760264   57426 kubeadm.go:322] 
	I0925 11:30:30.760361   57426 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 11:30:30.760370   57426 kubeadm.go:322] 
	I0925 11:30:30.760469   57426 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token snq27o.n0f9uw50v17gbayd \
	I0925 11:30:30.760617   57426 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 
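
For reference, the --discovery-token-ca-cert-hash printed above is, per kubeadm's convention, the SHA-256 of the cluster CA certificate's Subject Public Key Info. A sketch that recomputes it, assuming the certificate directory named earlier in the log:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
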
	I0925 11:30:30.760630   57426 cni.go:84] Creating CNI manager for ""
	I0925 11:30:30.760655   57426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 11:30:30.760693   57426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 11:30:30.760827   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:30.760899   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=old-k8s-version-694015 minikube.k8s.io/updated_at=2023_09_25T11_30_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:30.820984   57426 ops.go:34] apiserver oom_adj: -16
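
The oom_adj probe above locates the kube-apiserver process and reads its /proc/<pid>/oom_adj, reported here as -16 (shielded from the OOM killer). A local sketch of the same check; `pgrep -o` picks the oldest matching PID:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
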
	I0925 11:30:31.034555   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:31.165894   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:31.768765   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:32.269393   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:32.768687   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:33.269126   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:33.768794   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:34.269149   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:34.769469   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:35.268685   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:35.769384   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:36.269510   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:36.768848   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:37.268799   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:37.769259   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:38.269444   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:38.769081   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:39.269471   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:39.768795   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:40.269215   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:40.768992   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:41.269161   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:41.768782   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:42.269438   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:42.769149   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:43.268490   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:43.768911   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:44.269363   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:44.769428   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:45.268548   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:45.769489   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:46.046613   57426 kubeadm.go:1081] duration metric: took 15.285826285s to wait for elevateKubeSystemPrivileges.
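
The repeated `kubectl get sa default` runs above poll until the default service account exists, since workloads cannot be created before it does. A sketch of that wait, assuming kubectl on PATH; the kubeconfig path mirrors the log but is a VM-side path, so it is illustrative here:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 60; i++ {
		// Succeeds only once the "default" service account exists.
		if exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default").Run() == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	fmt.Println("timed out waiting for default service account")
}
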
	I0925 11:30:46.046655   57426 kubeadm.go:406] StartCluster complete in 5m34.119546847s
	I0925 11:30:46.046676   57426 settings.go:142] acquiring lock: {Name:mk372f3d0f6e5777ebfc48341e146821f27f636c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:30:46.046764   57426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:30:46.048206   57426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:30:46.048432   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 11:30:46.048574   57426 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 11:30:46.048644   57426 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-694015"
	I0925 11:30:46.048653   57426 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-694015"
	I0925 11:30:46.048678   57426 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-694015"
	I0925 11:30:46.048687   57426 addons.go:69] Setting dashboard=true in profile "old-k8s-version-694015"
	W0925 11:30:46.048690   57426 addons.go:240] addon storage-provisioner should already be in state true
	I0925 11:30:46.048698   57426 addons.go:231] Setting addon dashboard=true in "old-k8s-version-694015"
	W0925 11:30:46.048709   57426 addons.go:240] addon dashboard should already be in state true
	I0925 11:30:46.048720   57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 11:30:46.048735   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.048744   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.048818   57426 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-694015"
	I0925 11:30:46.048847   57426 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-694015"
	W0925 11:30:46.048855   57426 addons.go:240] addon metrics-server should already be in state true
	I0925 11:30:46.048680   57426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-694015"
	I0925 11:30:46.048796   57426 cache.go:107] acquiring lock: {Name:mk67fca357e44d730577a3f111223198f60ef976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:30:46.048888   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.048935   57426 cache.go:115] /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0925 11:30:46.048944   57426 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 153.391µs
	I0925 11:30:46.048955   57426 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0925 11:30:46.048963   57426 cache.go:87] Successfully saved all images to host disk.
	I0925 11:30:46.049135   57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 11:30:46.049144   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049162   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049168   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049183   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049247   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049260   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049320   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049333   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049505   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049555   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.072180   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
	I0925 11:30:46.072238   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0925 11:30:46.072269   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I0925 11:30:46.072356   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0925 11:30:46.072357   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I0925 11:30:46.072696   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.072776   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.072860   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.073344   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.073364   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.073496   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.073509   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.073509   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.073756   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.073762   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.073964   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.074195   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.074210   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.074253   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.074286   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.074439   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.074467   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.074610   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.074656   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.074686   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.074715   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.074930   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.075069   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.075101   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.075234   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.075269   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.075582   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.075811   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.077659   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.077697   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.094611   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
	I0925 11:30:46.097022   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
	I0925 11:30:46.097145   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.097460   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.097748   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.097767   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.098172   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.098314   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.098333   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.098564   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.098618   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.099229   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.101256   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.103863   57426 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0925 11:30:46.102124   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.102436   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0925 11:30:46.106576   57426 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0925 11:30:46.105560   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.109500   57426 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 11:30:46.108220   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0925 11:30:46.108845   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.110913   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.110969   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0925 11:30:46.110985   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.110999   57426 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:30:46.111011   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
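
The `scp memory --> ...` lines above stream in-memory addon manifests straight to root-owned paths on the VM. A sketch of one such copy over SSH by piping into `sudo tee`, assuming golang.org/x/crypto/ssh; host, user, key path, and target file mirror the surrounding log, and the manifest body is illustrative:

package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.50.17:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n")
	// Pipe the manifest into sudo tee so the write happens as root.
	sess.Stdin = bytes.NewReader(manifest)
	if err := sess.Run("sudo tee /etc/kubernetes/addons/dashboard-ns.yaml >/dev/null"); err != nil {
		panic(err)
	}
}
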
	I0925 11:30:46.111024   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.112450   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.112637   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.112839   57426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:30:46.112862   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.115509   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.115949   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.115983   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.116123   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.116214   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.116253   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.116342   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.116466   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.116484   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.116508   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.116774   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.116925   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.117104   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.117252   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.119073   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.119413   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.119430   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.119685   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.119854   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.120011   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.120148   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.127174   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I0925 11:30:46.127843   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.128399   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.128428   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.128967   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.129155   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.129945   57426 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-694015" context rescaled to 1 replicas
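
The kapi line above rescales the coredns deployment to a single replica for this one-node cluster. A sketch of that step via the deployment's scale subresource with client-go; the kubeconfig path is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17297-6032/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// Fetch the current scale, then write it back with replicas set to 1.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
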
	I0925 11:30:46.129977   57426 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 11:30:46.131741   57426 out.go:177] * Verifying Kubernetes components...
	I0925 11:30:46.133087   57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:46.130848   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.134728   57426 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0925 11:30:46.136080   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 11:30:46.136097   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0925 11:30:46.136115   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.139231   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.139692   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.139718   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.139957   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.140113   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.140252   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.140377   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.147885   57426 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-694015"
	W0925 11:30:46.147907   57426 addons.go:240] addon default-storageclass should already be in state true
	I0925 11:30:46.147934   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.148356   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.148384   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.173474   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0925 11:30:46.174243   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.174879   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.174900   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.176033   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.176694   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.176736   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.196631   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
	I0925 11:30:46.197107   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.197645   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.197665   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.198067   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.198270   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.200093   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.200354   57426 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 11:30:46.200371   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 11:30:46.200390   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.203486   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.203884   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.203998   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.204172   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.204342   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.204489   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.204636   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.413931   57426 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-694015" to be "Ready" ...
	I0925 11:30:46.414008   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 11:30:46.416569   57426 node_ready.go:49] node "old-k8s-version-694015" has status "Ready":"True"
	I0925 11:30:46.416586   57426 node_ready.go:38] duration metric: took 2.626333ms waiting for node "old-k8s-version-694015" to be "Ready" ...
	I0925 11:30:46.416594   57426 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:30:46.420795   57426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace to be "Ready" ...
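pod_ready.go polls the pod's Ready condition until it reports True or the 6m budget expires. The equivalent manual query (a sketch; assumes kubectl is pointed at this cluster's kubeconfig):

	# prints "True" once the Ready condition is met, "False" until then
	kubectl -n kube-system get pod coredns-5644d7b6d9-qnqxm \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'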
	I0925 11:30:46.484507   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 11:30:46.484532   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0925 11:30:46.532417   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0925 11:30:46.532443   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0925 11:30:46.575299   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 11:30:46.575317   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0925 11:30:46.595994   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:30:46.596018   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0925 11:30:46.652448   57426 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	registry.k8s.io/pause:3.1
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0925 11:30:46.652473   57426 cache_images.go:84] Images are preloaded, skipping loading
	I0925 11:30:46.652480   57426 cache_images.go:262] succeeded pushing to: old-k8s-version-694015
	I0925 11:30:46.652483   57426 cache_images.go:263] failed pushing to: 
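Because the preload tarball already populated the Docker image store, cache_images skips loading anything. The same inventory can be taken by hand (a sketch):

	# list what the VM's Docker daemon already holds
	minikube -p old-k8s-version-694015 ssh "docker images"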
	I0925 11:30:46.652504   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:46.652518   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:46.652957   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:46.652963   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:46.652991   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:46.653007   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:46.653020   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:46.653288   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:46.653304   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:46.705521   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:30:46.707099   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:30:46.712115   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0925 11:30:46.712134   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0925 11:30:46.762833   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 11:30:46.851711   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0925 11:30:46.851753   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0925 11:30:47.115165   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0925 11:30:47.115193   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0925 11:30:47.386363   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0925 11:30:47.386386   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0925 11:30:47.610468   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0925 11:30:47.610490   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0925 11:30:47.697559   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0925 11:30:47.697578   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0925 11:30:47.864150   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0925 11:30:47.864169   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0925 11:30:47.915917   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:30:47.915945   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0925 11:30:48.000793   57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.586742998s)
	I0925 11:30:48.000836   57426 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
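The sed pipeline that just completed edits the coredns ConfigMap in place: it inserts a hosts stanza ahead of the forward plugin so host.minikube.internal resolves to the host gateway (192.168.50.1), and adds a log directive before errors. The patched Corefile can be read back with (a sketch):

	# the .:53 block now contains:
	#     hosts {
	#        192.168.50.1 host.minikube.internal
	#        fallthrough
	#     }
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'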
	I0925 11:30:48.085411   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:30:48.190617   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.485051258s)
	I0925 11:30:48.190677   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.190691   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.191035   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.191056   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.191068   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.191078   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.192850   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.192853   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.192876   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.192885   57426 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-694015"
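Verification here only confirms the manifests applied; with an unpullable image the deployment can never become available. Its API registration shows the consequence (a sketch; resource names as the addon registers them):

	# the aggregated API stays unavailable while the pod sits in ImagePullBackOff
	kubectl get apiservice v1beta1.metrics.k8s.io
	kubectl -n kube-system get deploy metrics-server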
	I0925 11:30:48.465209   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:48.575177   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.868034342s)
	I0925 11:30:48.575232   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575246   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575181   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.812311763s)
	I0925 11:30:48.575317   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575328   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575540   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.575560   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.575570   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575579   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575635   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.575742   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.575772   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.575781   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.575789   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575797   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575878   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.575903   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.575911   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.577345   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.577384   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.577406   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.577435   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.577451   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.577940   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.577944   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.577964   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:49.298546   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.21307781s)
	I0925 11:30:49.298606   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:49.298628   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:49.302266   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:49.302272   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:49.302307   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:49.302321   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:49.302331   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:49.302655   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:49.302695   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:49.302717   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:49.304441   57426 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-694015 addons enable metrics-server	
	
	
	I0925 11:30:49.306061   57426 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0925 11:30:49.307539   57426 addons.go:502] enable addons completed in 3.258962527s: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
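The next three minutes of output are a single condition poll repeated roughly every 2.5 seconds. Interactively, the same wait collapses to one command (a sketch):

	# blocks until the Ready condition is met or the timeout expires
	kubectl -n kube-system wait --for=condition=Ready \
	  pod/coredns-5644d7b6d9-qnqxm --timeout=6m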
	I0925 11:30:50.940378   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:53.436796   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:55.437380   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:57.449840   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:59.938237   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:02.438436   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:04.937614   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:06.937878   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:09.437807   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:11.939073   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:14.437620   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:16.938666   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:19.437732   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:21.938151   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:23.938328   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:26.439526   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:28.937508   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:30.943648   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:33.437428   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:35.438086   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:37.439039   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:39.442448   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:41.937237   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:43.939282   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:46.438561   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:48.938598   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:50.938694   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:52.939141   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:55.438245   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:57.937434   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:00.437596   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:02.437909   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:04.438109   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:06.438145   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:08.938681   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:11.438436   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:13.438614   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:15.938889   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:18.438798   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:20.937670   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:22.938056   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:24.938180   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:26.938537   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:28.938993   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:30.939782   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:33.438287   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:35.438564   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:37.938062   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:40.438394   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:42.439143   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:44.938221   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:46.940247   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:48.940644   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:51.437686   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:53.438013   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:55.438473   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:57.939231   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:00.438636   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:02.937519   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:04.937631   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:07.436605   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:09.437297   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:11.438337   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:13.939288   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:15.940496   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:18.440278   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:20.938819   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:22.939228   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:24.940142   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:27.440968   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:29.937681   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:31.938903   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:34.438342   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:36.938434   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:39.437659   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:41.438288   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:43.937112   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:45.939462   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:47.439176   57426 pod_ready.go:97] node "old-k8s-version-694015" hosting pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.439201   57426 pod_ready.go:81] duration metric: took 3m1.018383263s waiting for pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:47.439210   57426 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-694015" hosting pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.439218   57426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:47.441757   57426 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rn247" not found
	I0925 11:33:47.441785   57426 pod_ready.go:81] duration metric: took 2.55834ms waiting for pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:47.441797   57426 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rn247" not found
	I0925 11:33:47.441806   57426 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gsdzk" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:47.447728   57426 pod_ready.go:97] node "old-k8s-version-694015" hosting pod "kube-proxy-gsdzk" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.447759   57426 pod_ready.go:81] duration metric: took 5.944858ms waiting for pod "kube-proxy-gsdzk" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:47.447770   57426 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-694015" hosting pod "kube-proxy-gsdzk" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.447777   57426 pod_ready.go:38] duration metric: took 3m1.031173472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
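Note the flip: the node reported "Ready":"True" at 11:30:46 yet is NotReady by 11:33:47, and that is what aborts the per-pod waits. The node's condition list narrows down why (a sketch):

	# dump every node condition as type=status, one per line
	kubectl get node old-k8s-version-694015 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'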
	I0925 11:33:47.447809   57426 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:33:47.447887   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:33:47.480326   57426 logs.go:284] 1 containers: [34825b8222f1]
	I0925 11:33:47.480410   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:33:47.500790   57426 logs.go:284] 1 containers: [4b655f8475a9]
	I0925 11:33:47.500883   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:33:47.521967   57426 logs.go:284] 1 containers: [c4e353aa787b]
	I0925 11:33:47.522043   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:33:47.542833   57426 logs.go:284] 1 containers: [08dbfa6061b3]
	I0925 11:33:47.542921   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:33:47.564220   57426 logs.go:284] 1 containers: [2bccdb65c1cc]
	I0925 11:33:47.564296   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:33:47.585142   57426 logs.go:284] 1 containers: [59225a8740b7]
	I0925 11:33:47.585233   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:33:47.604606   57426 logs.go:284] 0 containers: []
	W0925 11:33:47.604638   57426 logs.go:286] No container was found matching "kindnet"
	I0925 11:33:47.604734   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:33:47.634903   57426 logs.go:284] 1 containers: [0f9de8bda7fb]
	I0925 11:33:47.634987   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:33:47.659599   57426 logs.go:284] 1 containers: [90dc66317fc1]
	I0925 11:33:47.659654   57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
	I0925 11:33:47.659677   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
	I0925 11:33:47.713402   57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
	I0925 11:33:47.713441   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
	I0925 11:33:47.746308   57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
	I0925 11:33:47.746347   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
	I0925 11:33:47.777953   57426 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:33:47.777991   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:33:47.933013   57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
	I0925 11:33:47.933041   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
	I0925 11:33:47.959588   57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
	I0925 11:33:47.959623   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
	I0925 11:33:47.989240   57426 logs.go:123] Gathering logs for container status ...
	I0925 11:33:47.989285   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:33:48.069991   57426 logs.go:123] Gathering logs for kubelet ...
	I0925 11:33:48.070022   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 11:33:48.107511   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.108197   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.108438   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.108657   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940    1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
	W0925 11:33:48.109661   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.109891   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.110800   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.111045   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.111291   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.111524   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.112518   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.112765   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.112989   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113221   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113444   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113656   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113877   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.114848   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.115076   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115297   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115517   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115743   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115978   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.116194   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.148933   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:48.150648   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
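Two distinct kubelet problems surface above: the recurring metrics-server pull failures against fake.domain (expected, given the image override noted earlier in this run) and a one-off forbidden secrets list, typically a transient node-authorizer denial right after the kubelet restart, before the pod-to-node relationship is re-established. The pull back-off also shows up in pod events (a sketch; assumes the addon's usual k8s-app=metrics-server label):

	kubectl -n kube-system describe pod -l k8s-app=metrics-server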
	I0925 11:33:48.152304   57426 logs.go:123] Gathering logs for dmesg ...
	I0925 11:33:48.152321   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:33:48.170706   57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
	I0925 11:33:48.170735   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
	I0925 11:33:48.204533   57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
	I0925 11:33:48.204574   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
	I0925 11:33:48.242201   57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
	I0925 11:33:48.242239   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
	I0925 11:33:48.305874   57426 logs.go:123] Gathering logs for Docker ...
	I0925 11:33:48.305916   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:33:48.375041   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:48.375074   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0925 11:33:48.375130   57426 out.go:239] X Problems detected in kubelet:
	W0925 11:33:48.375142   57426 out.go:239]   Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.375161   57426 out.go:239]   Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.375169   57426 out.go:239]   Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.375176   57426 out.go:239]   Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:48.375185   57426 out.go:239]   Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:48.375190   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:48.375199   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:33:58.376816   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:33:58.397417   57426 api_server.go:72] duration metric: took 3m12.267407933s to wait for apiserver process to appear ...
	I0925 11:33:58.397443   57426 api_server.go:88] waiting for apiserver healthz status ...
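The healthz wait probes the apiserver's health endpoint on the node IP and port recorded earlier. By hand (a sketch; -k because the cluster CA is not in the local trust store):

	curl -k https://192.168.50.17:8443/healthz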
	I0925 11:33:58.397517   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:33:58.423312   57426 logs.go:284] 1 containers: [34825b8222f1]
	I0925 11:33:58.423385   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:33:58.443439   57426 logs.go:284] 1 containers: [4b655f8475a9]
	I0925 11:33:58.443499   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:33:58.463360   57426 logs.go:284] 1 containers: [c4e353aa787b]
	I0925 11:33:58.463443   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:33:58.486151   57426 logs.go:284] 1 containers: [08dbfa6061b3]
	I0925 11:33:58.486228   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:33:58.507009   57426 logs.go:284] 1 containers: [2bccdb65c1cc]
	I0925 11:33:58.507095   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:33:58.525571   57426 logs.go:284] 1 containers: [59225a8740b7]
	I0925 11:33:58.525647   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:33:58.542397   57426 logs.go:284] 0 containers: []
	W0925 11:33:58.542424   57426 logs.go:286] No container was found matching "kindnet"
	I0925 11:33:58.542481   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:33:58.562186   57426 logs.go:284] 1 containers: [0f9de8bda7fb]
	I0925 11:33:58.562260   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:33:58.580984   57426 logs.go:284] 1 containers: [90dc66317fc1]
	I0925 11:33:58.581014   57426 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:33:58.581030   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:33:58.731921   57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
	I0925 11:33:58.731958   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
	I0925 11:33:58.759982   57426 logs.go:123] Gathering logs for Docker ...
	I0925 11:33:58.760017   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:33:58.817088   57426 logs.go:123] Gathering logs for kubelet ...
	I0925 11:33:58.817120   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 11:33:58.851581   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.852006   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.852226   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.852405   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940    1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
	W0925 11:33:58.853080   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.853245   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.853866   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.854027   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.854211   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.854408   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855047   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.855223   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855403   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855601   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855811   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.856008   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.856210   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.856868   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.857032   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857219   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857418   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857616   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857814   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.858011   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.889357   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:58.891108   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:58.893160   57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
	I0925 11:33:58.893178   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
	I0925 11:33:58.927223   57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
	I0925 11:33:58.927264   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
	I0925 11:33:58.951343   57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
	I0925 11:33:58.951376   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
	I0925 11:33:58.979268   57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
	I0925 11:33:58.979303   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
	I0925 11:33:59.010031   57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
	I0925 11:33:59.010059   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
	I0925 11:33:59.050333   57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
	I0925 11:33:59.050367   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
	I0925 11:33:59.093782   57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
	I0925 11:33:59.093820   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
	I0925 11:33:59.118196   57426 logs.go:123] Gathering logs for container status ...
	I0925 11:33:59.118222   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:33:59.228267   57426 logs.go:123] Gathering logs for dmesg ...
	I0925 11:33:59.228306   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
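
[Editor's note] The gathering pass above tails the last 400 lines of each container found earlier, and reads systemd-managed components (kubelet, docker, cri-docker) via journalctl instead. A hedged sketch of both helpers follows; running the commands locally rather than over SSH is again a simplification for brevity.

    // gather.go — sketch of the log-gathering pass shown above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainer returns the last n lines of a container's logs.
    func tailContainer(id string, n int) (string, error) {
        out, err := exec.Command("docker", "logs",
            "--tail", fmt.Sprint(n), id).CombinedOutput()
        return string(out), err
    }

    // tailUnits returns the last n journal lines for the given systemd units.
    func tailUnits(n int, units ...string) (string, error) {
        args := []string{"journalctl", "-n", fmt.Sprint(n)}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        if logs, err := tailUnits(400, "docker", "cri-docker"); err == nil {
            fmt.Println(logs)
        }
    }
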
	I0925 11:33:59.247426   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:59.247459   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0925 11:33:59.247517   57426 out.go:239] X Problems detected in kubelet:
	W0925 11:33:59.247534   57426 out.go:239]   Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:59.247545   57426 out.go:239]   Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:59.247554   57426 out.go:239]   Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:59.247563   57426 out.go:239]   Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:59.247574   57426 out.go:239]   Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:59.247584   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:59.247597   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
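
[Editor's note] The "Found kubelet problem" / "Problems detected in kubelet" lines above come from scanning the kubelet journal for known error signatures. The sketch below shows one way such a scan could look; the two regex patterns are inferred from the lines flagged in this report and are not the actual matcher minikube uses.

    // problems.go — hedged sketch of the kubelet-problem scan.
    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
    )

    // problemRe matches the two error signatures seen in this report;
    // the real pattern set may be broader.
    var problemRe = regexp.MustCompile(
        `Error syncing pod|Failed to list \*v1\.Secret`)

    // findProblems returns every journal line matching a problem pattern.
    func findProblems(journal string) []string {
        var hits []string
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            if problemRe.MatchString(sc.Text()) {
                hits = append(hits, sc.Text())
            }
        }
        return hits
    }

    func main() {
        sample := `E0925 pod_workers.go:191] Error syncing pod ...
    I0925 kubelet.go:42] ordinary informational line`
        for _, h := range findProblems(sample) {
            fmt.Println("Found kubelet problem:", h)
        }
    }
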
	I0925 11:34:09.249955   57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0925 11:34:09.256612   57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0925 11:34:09.257809   57426 api_server.go:141] control plane version: v1.16.0
	I0925 11:34:09.257827   57426 api_server.go:131] duration metric: took 10.860379501s to wait for apiserver health ...
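
[Editor's note] The healthz wait above polls https://192.168.50.17:8443/healthz until it answers 200 "ok". A minimal Go sketch of such a poll is below; skipping TLS verification is an assumption made to keep the example self-contained, since the real client would authenticate with the cluster's certificates.

    // healthz.go — sketch of the apiserver healthz poll shown above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns HTTP 200 or timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: verification skipped for illustration only.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        err := waitHealthz("https://192.168.50.17:8443/healthz", time.Minute)
        fmt.Println("healthz:", err)
    }
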
	I0925 11:34:09.257833   57426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:34:09.257883   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:34:09.280149   57426 logs.go:284] 1 containers: [34825b8222f1]
	I0925 11:34:09.280233   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:34:09.300127   57426 logs.go:284] 1 containers: [4b655f8475a9]
	I0925 11:34:09.300211   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:34:09.332581   57426 logs.go:284] 1 containers: [c4e353aa787b]
	I0925 11:34:09.332656   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:34:09.352994   57426 logs.go:284] 1 containers: [08dbfa6061b3]
	I0925 11:34:09.353061   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:34:09.374892   57426 logs.go:284] 1 containers: [2bccdb65c1cc]
	I0925 11:34:09.374960   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:34:09.395820   57426 logs.go:284] 1 containers: [59225a8740b7]
	I0925 11:34:09.395884   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:34:09.414225   57426 logs.go:284] 0 containers: []
	W0925 11:34:09.414245   57426 logs.go:286] No container was found matching "kindnet"
	I0925 11:34:09.414284   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:34:09.434336   57426 logs.go:284] 1 containers: [0f9de8bda7fb]
	I0925 11:34:09.434398   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:34:09.456185   57426 logs.go:284] 1 containers: [90dc66317fc1]
	I0925 11:34:09.456218   57426 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:34:09.456231   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:34:09.590378   57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
	I0925 11:34:09.590409   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
	I0925 11:34:09.617599   57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
	I0925 11:34:09.617624   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
	I0925 11:34:09.643431   57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
	I0925 11:34:09.643459   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
	I0925 11:34:09.665103   57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
	I0925 11:34:09.665129   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
	I0925 11:34:09.693931   57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
	I0925 11:34:09.693963   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
	I0925 11:34:09.742784   57426 logs.go:123] Gathering logs for Docker ...
	I0925 11:34:09.742812   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:34:09.804145   57426 logs.go:123] Gathering logs for dmesg ...
	I0925 11:34:09.804177   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:34:09.818586   57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
	I0925 11:34:09.818609   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
	I0925 11:34:09.857846   57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
	I0925 11:34:09.857875   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
	I0925 11:34:09.880799   57426 logs.go:123] Gathering logs for container status ...
	I0925 11:34:09.880828   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:34:09.950547   57426 logs.go:123] Gathering logs for kubelet ...
	I0925 11:34:09.950572   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 11:34:09.983084   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.983479   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.983617   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.983758   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940    1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
	W0925 11:34:09.984405   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.984547   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.985367   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.985576   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.985713   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.985898   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.986632   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.986786   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.986945   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987132   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987279   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987469   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987663   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988255   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.988398   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988533   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988685   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988822   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988958   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.989093   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.020550   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:34:10.022302   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:34:10.024541   57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
	I0925 11:34:10.024558   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
	I0925 11:34:10.053454   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:34:10.053477   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0925 11:34:10.053524   57426 out.go:239] X Problems detected in kubelet:
	W0925 11:34:10.053535   57426 out.go:239]   Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.053543   57426 out.go:239]   Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.053551   57426 out.go:239]   Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.053557   57426 out.go:239]   Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:34:10.053563   57426 out.go:239]   Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:34:10.053568   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:34:10.053573   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:34:20.061232   57426 system_pods.go:59] 8 kube-system pods found
	I0925 11:34:20.061260   57426 system_pods.go:61] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.061267   57426 system_pods.go:61] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.061271   57426 system_pods.go:61] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.061277   57426 system_pods.go:61] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.061284   57426 system_pods.go:61] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.061288   57426 system_pods.go:61] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.061295   57426 system_pods.go:61] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.061300   57426 system_pods.go:61] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.061307   57426 system_pods.go:74] duration metric: took 10.803468736s to wait for pod list to return data ...
	I0925 11:34:20.061314   57426 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:34:20.064090   57426 default_sa.go:45] found service account: "default"
	I0925 11:34:20.064114   57426 default_sa.go:55] duration metric: took 2.793638ms for default service account to be created ...
	I0925 11:34:20.064123   57426 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:34:20.068614   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:20.068644   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.068653   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.068674   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.068682   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.068690   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.068696   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.068707   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.068719   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.068739   57426 retry.go:31] will retry after 201.15744ms: missing components: kube-dns, kube-proxy
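
[Editor's note] The iterations that follow repeat the pattern just shown: list the kube-system pods, work out which required components are still missing (here kube-dns and kube-proxy), and sleep for a jittered, growing interval before checking again. A generic Go sketch of that loop is below; the check function is a stand-in, since the real code queries the API server for the pod list.

    // retry.go — sketch of the wait-for-components retry loop above.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForComponents polls check() until nothing is missing or maxWait
    // elapses, sleeping a jittered, growing backoff between attempts.
    func waitForComponents(check func() []string, maxWait time.Duration) error {
        backoff := 200 * time.Millisecond
        deadline := time.Now().Add(maxWait)
        for time.Now().Before(deadline) {
            missing := check()
            if len(missing) == 0 {
                return nil
            }
            // Jitter the delay, as the varying intervals in the log suggest.
            d := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: missing components: %v\n", d, missing)
            time.Sleep(d)
            if backoff < 2*time.Second {
                backoff += backoff / 2
            }
        }
        return fmt.Errorf("components still missing after %v", maxWait)
    }

    func main() {
        calls := 0
        err := waitForComponents(func() []string {
            calls++
            if calls < 3 {
                return []string{"kube-dns", "kube-proxy"}
            }
            return nil
        }, 30*time.Second)
        fmt.Println("done:", err)
    }
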
	I0925 11:34:20.275900   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:20.275943   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.275952   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.275960   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.275967   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.275974   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.275982   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.275992   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.276001   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.276021   57426 retry.go:31] will retry after 295.538203ms: missing components: kube-dns, kube-proxy
	I0925 11:34:20.579425   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:20.579469   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.579480   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.579489   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.579497   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.579506   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.579513   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.579522   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.579531   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.579553   57426 retry.go:31] will retry after 438.061345ms: missing components: kube-dns, kube-proxy
	I0925 11:34:21.024313   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:21.024351   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:21.024360   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:21.024365   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:21.024372   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:21.024381   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:21.024390   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:21.024401   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:21.024411   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:21.024428   57426 retry.go:31] will retry after 504.61622ms: missing components: kube-dns, kube-proxy
	I0925 11:34:21.536419   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:21.536449   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:21.536460   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:21.536466   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:21.536470   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:21.536476   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:21.536480   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:21.536486   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:21.536492   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:21.536506   57426 retry.go:31] will retry after 484.39135ms: missing components: kube-dns, kube-proxy
	I0925 11:34:22.027728   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:22.027766   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:22.027776   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:22.027783   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:22.027787   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:22.027796   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:22.027804   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:22.027814   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:22.027822   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:22.027838   57426 retry.go:31] will retry after 680.21989ms: missing components: kube-dns, kube-proxy
	I0925 11:34:22.714282   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:22.714315   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:22.714326   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:22.714335   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:22.714342   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:22.714349   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:22.714354   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:22.714365   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:22.714381   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:22.714399   57426 retry.go:31] will retry after 719.383007ms: missing components: kube-dns, kube-proxy
	I0925 11:34:23.438829   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:23.438855   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:23.438862   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:23.438867   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:23.438872   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:23.438877   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:23.438882   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:23.438891   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:23.438898   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:23.438912   57426 retry.go:31] will retry after 1.277927153s: missing components: kube-dns, kube-proxy
	I0925 11:34:24.724821   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:24.724855   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:24.724864   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:24.724871   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:24.724878   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:24.724887   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:24.724894   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:24.724904   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:24.724919   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:24.724942   57426 retry.go:31] will retry after 1.757108265s: missing components: kube-dns, kube-proxy
	I0925 11:34:26.488127   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:26.488156   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:26.488163   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:26.488182   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:26.488203   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:26.488213   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:26.488222   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:26.488232   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:26.488247   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:26.488266   57426 retry.go:31] will retry after 1.427718537s: missing components: kube-dns, kube-proxy
	I0925 11:34:27.921755   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:27.921783   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:27.921790   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:27.921795   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:27.921800   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:27.921805   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:27.921810   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:27.921815   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:27.921821   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:27.921835   57426 retry.go:31] will retry after 1.957734881s: missing components: kube-dns, kube-proxy
	I0925 11:34:29.885748   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:29.885776   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:29.885783   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:29.885789   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:29.885794   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:29.885799   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:29.885803   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:29.885810   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:29.885815   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:29.885830   57426 retry.go:31] will retry after 3.054467533s: missing components: kube-dns, kube-proxy
	I0925 11:34:32.946353   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:32.946383   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:32.946391   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:32.946396   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:32.946401   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:32.946406   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:32.946410   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:32.946416   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:32.946421   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:32.946434   57426 retry.go:31] will retry after 3.761041339s: missing components: kube-dns, kube-proxy
	I0925 11:34:36.713729   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:36.713754   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:36.713761   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:36.713767   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:36.713772   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:36.713777   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:36.713781   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:36.713788   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:36.713793   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:36.713807   57426 retry.go:31] will retry after 4.734467176s: missing components: kube-dns, kube-proxy
	I0925 11:34:41.454464   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:41.454492   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:41.454498   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:41.454503   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:41.454508   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:41.454513   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:41.454518   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:41.454524   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:41.454529   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:41.454542   57426 retry.go:31] will retry after 4.698913888s: missing components: kube-dns, kube-proxy
	I0925 11:34:46.159214   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:46.159255   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:46.159266   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:46.159275   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:46.159282   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:46.159292   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:46.159299   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:46.159314   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:46.159328   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:46.159350   57426 retry.go:31] will retry after 5.507304477s: missing components: kube-dns, kube-proxy
	I0925 11:34:51.672849   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:51.672877   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:51.672884   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:51.672889   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:51.672894   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:51.672899   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:51.672905   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:51.672914   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:51.672919   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:51.672933   57426 retry.go:31] will retry after 8.254229342s: missing components: kube-dns, kube-proxy
	I0925 11:34:59.936057   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:59.936086   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:59.936094   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:59.936099   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:59.936104   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:59.936109   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:59.936114   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:59.936119   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:59.936125   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:59.936139   57426 retry.go:31] will retry after 9.535060954s: missing components: kube-dns, kube-proxy
	I0925 11:35:09.479385   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:09.479413   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:09.479420   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:09.479428   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:09.479433   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:09.479441   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:09.479446   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:09.479452   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:09.479459   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:09.479471   57426 retry.go:31] will retry after 13.479799453s: missing components: kube-dns, kube-proxy
	I0925 11:35:22.964926   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:22.964955   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:22.964962   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:22.964967   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:22.964972   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:22.964977   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:22.964982   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:22.964988   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:22.964993   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:22.965006   57426 retry.go:31] will retry after 14.199608167s: missing components: kube-dns, kube-proxy
	I0925 11:35:37.171988   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:37.172022   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:37.172034   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:37.172041   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:37.172048   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:37.172055   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:37.172061   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:37.172072   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:37.172083   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:37.172101   57426 retry.go:31] will retry after 17.274040235s: missing components: kube-dns, kube-proxy
	I0925 11:35:54.452675   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:54.452702   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:54.452709   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:54.452714   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:54.452719   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:54.452727   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:54.452731   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:54.452738   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:54.452743   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:54.452756   57426 retry.go:31] will retry after 28.29436119s: missing components: kube-dns, kube-proxy
	I0925 11:36:22.755662   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:36:22.755700   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:36:22.755710   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:36:22.755718   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:36:22.755724   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:36:22.755732   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:36:22.755746   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:36:22.755761   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:36:22.755771   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:36:22.755791   57426 retry.go:31] will retry after 35.525659438s: missing components: kube-dns, kube-proxy
	I0925 11:36:58.289849   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:36:58.289887   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:36:58.289896   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:36:58.289901   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:36:58.289910   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:36:58.289919   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:36:58.289927   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:36:58.289939   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:36:58.289950   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:36:58.289971   57426 retry.go:31] will retry after 44.058995008s: missing components: kube-dns, kube-proxy
	I0925 11:37:42.356673   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:37:42.356698   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:37:42.356705   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:37:42.356710   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:37:42.356715   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:37:42.356721   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:37:42.356725   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:37:42.356731   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:37:42.356736   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:37:42.356752   57426 retry.go:31] will retry after 47.757072258s: missing components: kube-dns, kube-proxy
	I0925 11:38:30.124408   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:38:30.124436   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:38:30.124443   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:38:30.124449   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:38:30.124454   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:38:30.124459   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:38:30.124464   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:38:30.124470   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:38:30.124475   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:38:30.124490   57426 retry.go:31] will retry after 48.54868015s: missing components: kube-dns, kube-proxy
	I0925 11:39:18.680525   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:39:18.680555   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:39:18.680561   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:39:18.680567   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:39:18.680572   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:39:18.680578   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:39:18.680582   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:39:18.680589   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:39:18.680594   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:39:18.680607   57426 retry.go:31] will retry after 53.095866632s: missing components: kube-dns, kube-proxy
	I0925 11:40:11.783486   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:40:11.783513   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:40:11.783520   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:40:11.783527   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:40:11.783532   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:40:11.783537   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:40:11.783542   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:40:11.783548   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:40:11.783553   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:40:11.786119   57426 out.go:177] 
	W0925 11:40:11.787697   57426 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
	W0925 11:40:11.787711   57426 out.go:239] * 
	W0925 11:40:11.788461   57426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 11:40:11.790057   57426 out.go:177] 

                                                
                                                
** /stderr **
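The retry cadence in the stderr capture above, starting near 0.7s and roughly doubling toward ~50s per attempt, has the shape of capped exponential backoff with jitter. A minimal stdlib sketch of that pattern, with a hypothetical checkComponents standing in for the system-pods query (an illustration of the technique, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// checkComponents is a hypothetical stand-in for the system-pods query;
	// it fails while required components are still missing.
	func checkComponents() error {
		return errors.New("missing components: kube-dns, kube-proxy")
	}

	// waitWithBackoff polls check until it succeeds or the deadline passes,
	// roughly doubling the delay (plus jitter) up to maxDelay, which yields
	// a cadence like the "will retry after ..." lines in the log above.
	func waitWithBackoff(check func() error, deadline, maxDelay time.Duration) error {
		start := time.Now()
		delay := 500 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("wait %s for node: %w", deadline, err)
			}
			// Up to 50% jitter keeps concurrent waiters from polling in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
		}
	}

	func main() {
		// Short deadline so the demo terminates quickly.
		if err := waitWithBackoff(checkComponents, 3*time.Second, time.Minute); err != nil {
			fmt.Println("X Exiting:", err)
		}
	}

Backoff with jitter keeps a persistently failing check from hammering the apiserver while the early, short delays still notice a quick recovery.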
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-694015 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0": exit status 80
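The wait that times out here is the apps_running verification: each expected k8s-app (here kube-dns and kube-proxy) must have at least one Running pod in kube-system. A rough client-go sketch of such a check, assuming a kubeconfig at the default location (an approximation for illustration, not minikube's system_pods.go):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// missingComponents lists each expected k8s-app label that has no
	// Running pod in kube-system.
	func missingComponents(cs kubernetes.Interface, expected []string) ([]string, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return nil, err
		}
		running := map[string]bool{}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running[p.Labels["k8s-app"]] = true
			}
		}
		var missing []string
		for _, app := range expected {
			if !running[app] {
				missing = append(missing, app)
			}
		}
		return missing, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		missing, err := missingComponents(cs, []string{"kube-dns", "kube-proxy"})
		if err != nil {
			panic(err)
		}
		fmt.Println("missing components:", missing)
	}

Against the cluster state logged above, a check of this shape would keep reporting both components missing, since the coredns and kube-proxy pods never leave Pending within the 6m0s window.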
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694015 -n old-k8s-version-694015
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-694015 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p newest-cni-372603                                   | newest-cni-372603            | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-372603                                   | newest-cni-372603            | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-372603                                   | newest-cni-372603            | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	| delete  | -p newest-cni-372603                                   | newest-cni-372603            | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-785493 | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	|         | disable-driver-mounts-785493                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-094323            | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-094323                 | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p no-preload-863905 sudo                              | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-863905                                   | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-863905                                   | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-863905                                   | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	| delete  | -p no-preload-863905                                   | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	| ssh     | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-094323 sudo                             | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	| delete  | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 11:28:19
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 11:28:19.035134   59899 out.go:296] Setting OutFile to fd 1 ...
	I0925 11:28:19.035380   59899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:28:19.035388   59899 out.go:309] Setting ErrFile to fd 2...
	I0925 11:28:19.035392   59899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:28:19.035594   59899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
	I0925 11:28:19.036084   59899 out.go:303] Setting JSON to false
	I0925 11:28:19.037024   59899 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4250,"bootTime":1695637049,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 11:28:19.037076   59899 start.go:138] virtualization: kvm guest
	I0925 11:28:19.039385   59899 out.go:177] * [embed-certs-094323] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 11:28:19.041106   59899 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 11:28:19.041220   59899 notify.go:220] Checking for updates...
	I0925 11:28:19.042531   59899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:28:19.043924   59899 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:28:19.045264   59899 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	I0925 11:28:19.046665   59899 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 11:28:19.047943   59899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 11:28:19.049713   59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 11:28:19.050284   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:28:19.050336   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:28:19.066768   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0925 11:28:19.067166   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:28:19.067840   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:28:19.067866   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:28:19.068328   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:28:19.068548   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:19.069227   59899 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 11:28:19.070747   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:28:19.070796   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:28:19.084889   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38725
	I0925 11:28:19.085259   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:28:19.085647   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:28:19.085666   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:28:19.085966   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:28:19.086156   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:19.120695   59899 out.go:177] * Using the kvm2 driver based on existing profile
	I0925 11:28:19.122195   59899 start.go:298] selected driver: kvm2
	I0925 11:28:19.122213   59899 start.go:902] validating driver "kvm2" against &{Name:embed-certs-094323 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 11:28:19.122331   59899 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 11:28:19.122990   59899 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:28:19.123070   59899 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17297-6032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0925 11:28:19.137559   59899 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0925 11:28:19.137967   59899 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 11:28:19.138031   59899 cni.go:84] Creating CNI manager for ""
	I0925 11:28:19.138049   59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:28:19.138061   59899 start_flags.go:321] config:
	{Name:embed-certs-094323 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 11:28:19.138243   59899 iso.go:125] acquiring lock: {Name:mkb9e2f6e1d5a2b50ee182236ae1b19ef3677829 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:28:19.139914   59899 out.go:177] * Starting control plane node embed-certs-094323 in cluster embed-certs-094323
	I0925 11:28:19.141213   59899 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 11:28:19.141251   59899 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0925 11:28:19.141267   59899 cache.go:57] Caching tarball of preloaded images
	I0925 11:28:19.141342   59899 preload.go:174] Found /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0925 11:28:19.141351   59899 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
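
	The preload steps above are a pure cache lookup: the version-specific image tarball already sits under .minikube/cache, so the download is skipped. A small Go sketch of that check; the path layout mirrors the log, and the miss branch is an assumption about what a cache miss would trigger:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Cache layout as seen in the log; $HOME stands in for the
		// jenkins minikube-integration home used on the test host.
		cache := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball")
		name := "preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4"
		p := filepath.Join(cache, name)
		if _, err := os.Stat(p); err == nil {
			fmt.Println("Found local preload:", p, "- skipping download")
		} else {
			fmt.Println("preload missing; would download it") // assumed fallback
		}
	}
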
	I0925 11:28:19.141434   59899 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/config.json ...
	I0925 11:28:19.141593   59899 start.go:365] acquiring machines lock for embed-certs-094323: {Name:mk02fb3d97d6ed60b07ca18d96424c593d1bb8d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 11:28:19.141630   59899 start.go:369] acquired machines lock for "embed-certs-094323" in 22.488µs
	I0925 11:28:19.141643   59899 start.go:96] Skipping create...Using existing machine configuration
	I0925 11:28:19.141651   59899 fix.go:54] fixHost starting: 
	I0925 11:28:19.141918   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:28:19.141948   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:28:19.155211   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0925 11:28:19.155620   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:28:19.156032   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:28:19.156055   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:28:19.156384   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:28:19.156590   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:19.156767   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:28:19.158188   59899 fix.go:102] recreateIfNeeded on embed-certs-094323: state=Stopped err=<nil>
	I0925 11:28:19.158223   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	W0925 11:28:19.158395   59899 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 11:28:19.160159   59899 out.go:177] * Restarting existing kvm2 VM for "embed-certs-094323" ...
	I0925 11:28:15.403806   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:17.404448   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:19.405067   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:15.674829   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:18.175095   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:20.492932   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:22.991315   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:19.161340   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Start
	I0925 11:28:19.161501   59899 main.go:141] libmachine: (embed-certs-094323) Ensuring networks are active...
	I0925 11:28:19.162257   59899 main.go:141] libmachine: (embed-certs-094323) Ensuring network default is active
	I0925 11:28:19.162588   59899 main.go:141] libmachine: (embed-certs-094323) Ensuring network mk-embed-certs-094323 is active
	I0925 11:28:19.163048   59899 main.go:141] libmachine: (embed-certs-094323) Getting domain xml...
	I0925 11:28:19.163763   59899 main.go:141] libmachine: (embed-certs-094323) Creating domain...
	I0925 11:28:20.442361   59899 main.go:141] libmachine: (embed-certs-094323) Waiting to get IP...
	I0925 11:28:20.443271   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:20.443734   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:20.443823   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:20.443734   59935 retry.go:31] will retry after 267.692283ms: waiting for machine to come up
	I0925 11:28:20.713388   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:20.713952   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:20.713983   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:20.713901   59935 retry.go:31] will retry after 277.980932ms: waiting for machine to come up
	I0925 11:28:20.993556   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:20.994198   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:20.994234   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:20.994172   59935 retry.go:31] will retry after 459.010271ms: waiting for machine to come up
	I0925 11:28:21.454879   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:21.455430   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:21.455461   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:21.455383   59935 retry.go:31] will retry after 366.809435ms: waiting for machine to come up
	I0925 11:28:21.824207   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:21.824773   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:21.824806   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:21.824720   59935 retry.go:31] will retry after 488.071541ms: waiting for machine to come up
	I0925 11:28:22.314305   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:22.314790   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:22.314818   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:22.314762   59935 retry.go:31] will retry after 945.003407ms: waiting for machine to come up
	I0925 11:28:23.261899   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:23.262367   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:23.262409   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:23.262317   59935 retry.go:31] will retry after 1.092936458s: waiting for machine to come up
	I0925 11:28:21.407022   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:23.905338   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:20.674171   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:22.674573   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:25.174611   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:24.991430   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:27.491751   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:24.357394   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:24.358014   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:24.358072   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:24.357975   59935 retry.go:31] will retry after 1.364274695s: waiting for machine to come up
	I0925 11:28:25.723341   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:25.723819   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:25.723848   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:25.723762   59935 retry.go:31] will retry after 1.588423993s: waiting for machine to come up
	I0925 11:28:27.313769   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:27.314265   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:27.314299   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:27.314211   59935 retry.go:31] will retry after 1.537433598s: waiting for machine to come up
	I0925 11:28:28.853890   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:28.854449   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:28.854472   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:28.854378   59935 retry.go:31] will retry after 2.010519573s: waiting for machine to come up
	I0925 11:28:26.405198   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:28.409892   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:27.673983   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:29.675459   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:29.492466   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:31.493901   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:30.867498   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:30.868057   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:30.868084   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:30.868021   59935 retry.go:31] will retry after 2.230830763s: waiting for machine to come up
	I0925 11:28:33.100983   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:33.101572   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:33.101612   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:33.101515   59935 retry.go:31] will retry after 4.360204715s: waiting for machine to come up
	I0925 11:28:30.903969   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:32.905907   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:32.173159   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:34.672934   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:33.990422   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:35.990706   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:37.992428   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:37.463184   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.463720   59899 main.go:141] libmachine: (embed-certs-094323) Found IP for machine: 192.168.39.111
	I0925 11:28:37.463748   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has current primary IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.463757   59899 main.go:141] libmachine: (embed-certs-094323) Reserving static IP address...
	I0925 11:28:37.464174   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "embed-certs-094323", mac: "52:54:00:07:77:47", ip: "192.168.39.111"} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.464215   59899 main.go:141] libmachine: (embed-certs-094323) DBG | skip adding static IP to network mk-embed-certs-094323 - found existing host DHCP lease matching {name: "embed-certs-094323", mac: "52:54:00:07:77:47", ip: "192.168.39.111"}
	I0925 11:28:37.464230   59899 main.go:141] libmachine: (embed-certs-094323) Reserved static IP address: 192.168.39.111
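
	The retry.go lines above (267ms, 277ms, 459ms ... 4.36s) show libmachine polling for the VM's DHCP lease with a growing, jittered delay until the IP appears. A minimal Go sketch of that wait loop; getIP and the timing constants are illustrative stand-ins, not minikube's actual code:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("unable to find current IP address")

	// getIP stands in for querying the libvirt DHCP leases for the domain's MAC.
	func getIP(attempt int) (string, error) {
		if attempt < 5 { // pretend the lease appears on the 5th try
			return "", errNoLease
		}
		return "192.168.39.111", nil
	}

	func main() {
		backoff := 250 * time.Millisecond
		for attempt := 1; ; attempt++ {
			ip, err := getIP(attempt)
			if err == nil {
				fmt.Println("Found IP for machine:", ip)
				return
			}
			// Jitter the delay so parallel test VMs don't poll in lockstep,
			// and grow it so a slow boot doesn't get hammered.
			d := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", d)
			time.Sleep(d)
			backoff = backoff * 3 / 2
		}
	}
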
	I0925 11:28:37.464248   59899 main.go:141] libmachine: (embed-certs-094323) Waiting for SSH to be available...
	I0925 11:28:37.464264   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Getting to WaitForSSH function...
	I0925 11:28:37.466402   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.466816   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.466843   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.467015   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Using SSH client type: external
	I0925 11:28:37.467053   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Using SSH private key: /home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa (-rw-------)
	I0925 11:28:37.467087   59899 main.go:141] libmachine: (embed-certs-094323) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0925 11:28:37.467100   59899 main.go:141] libmachine: (embed-certs-094323) DBG | About to run SSH command:
	I0925 11:28:37.467136   59899 main.go:141] libmachine: (embed-certs-094323) DBG | exit 0
	I0925 11:28:37.556399   59899 main.go:141] libmachine: (embed-certs-094323) DBG | SSH cmd err, output: <nil>: 
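
	The WaitForSSH step above probes by running `exit 0` through the external ssh client with the non-interactive options shown; a zero exit status (the "SSH cmd err, output: <nil>" line) means sshd is up. A compressed sketch of that probe, where the host, key path, and 1s poll interval are placeholders:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func sshReady(host, keyPath string) bool {
		// Same spirit as the ssh invocation in the log: non-interactive,
		// no known-hosts pollution, key-only auth.
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+host,
			"exit 0")
		return cmd.Run() == nil // exit status 0 means the daemon answered
	}

	func main() {
		for !sshReady("192.168.39.111", "/path/to/id_rsa") { // placeholders
			time.Sleep(time.Second)
		}
		fmt.Println("SSH available")
	}
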
	I0925 11:28:37.556778   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetConfigRaw
	I0925 11:28:37.557414   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
	I0925 11:28:37.560030   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.560395   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.560428   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.560640   59899 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/config.json ...
	I0925 11:28:37.560845   59899 machine.go:88] provisioning docker machine ...
	I0925 11:28:37.560864   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:37.561073   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetMachineName
	I0925 11:28:37.561221   59899 buildroot.go:166] provisioning hostname "embed-certs-094323"
	I0925 11:28:37.561235   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetMachineName
	I0925 11:28:37.561420   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:37.563597   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.563895   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.563925   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.564030   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:37.564225   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.564405   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.564531   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:37.564705   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:37.565158   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:37.565180   59899 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-094323 && echo "embed-certs-094323" | sudo tee /etc/hostname
	I0925 11:28:37.695364   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-094323
	
	I0925 11:28:37.695398   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:37.698664   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.699091   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.699124   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.699344   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:37.699550   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.699717   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.699901   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:37.700108   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:37.700483   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:37.700503   59899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-094323' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-094323/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-094323' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 11:28:37.824658   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 11:28:37.824711   59899 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17297-6032/.minikube CaCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17297-6032/.minikube}
	I0925 11:28:37.824734   59899 buildroot.go:174] setting up certificates
	I0925 11:28:37.824745   59899 provision.go:83] configureAuth start
	I0925 11:28:37.824759   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetMachineName
	I0925 11:28:37.825074   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
	I0925 11:28:37.827695   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.828087   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.828131   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.828262   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:37.830526   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.830866   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.830897   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.830986   59899 provision.go:138] copyHostCerts
	I0925 11:28:37.831038   59899 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem, removing ...
	I0925 11:28:37.831050   59899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem
	I0925 11:28:37.831116   59899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem (1078 bytes)
	I0925 11:28:37.831199   59899 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem, removing ...
	I0925 11:28:37.831208   59899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem
	I0925 11:28:37.831231   59899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem (1123 bytes)
	I0925 11:28:37.831315   59899 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem, removing ...
	I0925 11:28:37.831322   59899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem
	I0925 11:28:37.831343   59899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem (1679 bytes)
	I0925 11:28:37.831388   59899 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem org=jenkins.embed-certs-094323 san=[192.168.39.111 192.168.39.111 localhost 127.0.0.1 minikube embed-certs-094323]
	I0925 11:28:37.908612   59899 provision.go:172] copyRemoteCerts
	I0925 11:28:37.908700   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 11:28:37.908735   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:37.911729   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.912109   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.912140   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.912334   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:37.912534   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.912716   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:37.912845   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:28:37.998547   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 11:28:38.026509   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0925 11:28:38.050201   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 11:28:38.074649   59899 provision.go:86] duration metric: configureAuth took 249.890915ms
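
	configureAuth above refreshes the host-side CA material and issues a server certificate whose SANs cover the VM IP, loopback, and the machine names from the san=[...] list. A self-contained crypto/x509 sketch of that issuance; for brevity it self-signs with a throwaway key instead of minikube's persistent CA under .minikube/certs:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway key; the real flow signs with the cached CA key pair.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-094323"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log's san=[...] list.
			IPAddresses: []net.IP{net.ParseIP("192.168.39.111"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "embed-certs-094323"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
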
	I0925 11:28:38.074676   59899 buildroot.go:189] setting minikube options for container-runtime
	I0925 11:28:38.074944   59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 11:28:38.074975   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:38.075242   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:38.078170   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.078528   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:38.078567   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.078795   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:38.078989   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.079174   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.079356   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:38.079539   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:38.079964   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:38.079984   59899 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 11:28:38.198741   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 11:28:38.198765   59899 buildroot.go:70] root file system type: tmpfs
	I0925 11:28:38.198890   59899 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 11:28:38.198915   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:38.201807   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.202182   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:38.202213   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.202351   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:38.202547   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.202711   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.202847   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:38.202992   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:38.203346   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:38.203422   59899 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 11:28:38.330031   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 11:28:38.330061   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:38.333195   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.333537   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:38.333568   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.333754   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:38.333924   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.334109   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.334259   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:38.334428   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:38.334869   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:38.334898   59899 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 11:28:35.403941   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:37.405325   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:36.673537   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:38.675023   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:39.250696   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 11:28:39.250732   59899 machine.go:91] provisioned docker machine in 1.689868908s
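
	The `sudo diff -u ... || { mv ...; systemctl ... }` one-liner above is an idempotence guard: the freshly rendered docker.service only replaces the installed unit, and the daemon is only reloaded, re-enabled, and restarted, when the contents actually differ. Here diff failed because no unit existed yet, so the new file was installed and the service enabled. The same logic rendered in Go, with paths hardcoded for the sketch:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const unit = "/lib/systemd/system/docker.service"
		current, _ := os.ReadFile(unit) // a missing unit reads as empty, i.e. "changed"
		proposed, err := os.ReadFile(unit + ".new")
		if err != nil {
			panic(err)
		}
		if bytes.Equal(current, proposed) {
			fmt.Println("docker.service unchanged; skipping restart")
			return
		}
		if err := os.Rename(unit+".new", unit); err != nil {
			panic(err)
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				panic(fmt.Sprintf("%v: %s", err, out))
			}
		}
	}
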
	I0925 11:28:39.250752   59899 start.go:300] post-start starting for "embed-certs-094323" (driver="kvm2")
	I0925 11:28:39.250766   59899 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 11:28:39.250786   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.251224   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 11:28:39.251260   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:39.254399   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.254904   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.254937   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.255093   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:39.255261   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.255432   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:39.255612   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:28:39.350663   59899 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 11:28:39.357361   59899 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 11:28:39.357388   59899 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-6032/.minikube/addons for local assets ...
	I0925 11:28:39.357464   59899 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-6032/.minikube/files for local assets ...
	I0925 11:28:39.357582   59899 filesync.go:149] local asset: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem -> 132132.pem in /etc/ssl/certs
	I0925 11:28:39.357712   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 11:28:39.374752   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem --> /etc/ssl/certs/132132.pem (1708 bytes)
	I0925 11:28:39.407365   59899 start.go:303] post-start completed in 156.599445ms
	I0925 11:28:39.407390   59899 fix.go:56] fixHost completed within 20.265737349s
	I0925 11:28:39.407412   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:39.409869   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.410204   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.410246   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.410351   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:39.410526   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.410672   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.410817   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:39.411004   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:39.411443   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:39.411457   59899 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0925 11:28:39.525878   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695641319.473578694
	
	I0925 11:28:39.525906   59899 fix.go:206] guest clock: 1695641319.473578694
	I0925 11:28:39.525916   59899 fix.go:219] Guest: 2023-09-25 11:28:39.473578694 +0000 UTC Remote: 2023-09-25 11:28:39.407394176 +0000 UTC m=+20.400726255 (delta=66.184518ms)
	I0925 11:28:39.525941   59899 fix.go:190] guest clock delta is within tolerance: 66.184518ms
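
	The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and skip resynchronization because the 66ms delta is inside tolerance. A sketch of that comparison using the exact timestamps from the log; the 2s threshold is an assumption, minikube's actual tolerance may differ:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		const tolerance = 2 * time.Second // assumed threshold
		guestOut := "1695641319.473578694" // what `date +%s.%N` printed in the VM
		parts := strings.SplitN(guestOut, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)

		// Host-side "Remote" timestamp from the log line above.
		remote := time.Date(2023, 9, 25, 11, 28, 39, 407394176, time.UTC)
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
		} else {
			fmt.Println("resyncing guest clock")
		}
	}
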
	I0925 11:28:39.525949   59899 start.go:83] releasing machines lock for "embed-certs-094323", held for 20.384309776s
	I0925 11:28:39.525980   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.526255   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
	I0925 11:28:39.528977   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.529347   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.529375   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.529553   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.530157   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.530328   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.530430   59899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 11:28:39.530480   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:39.530741   59899 ssh_runner.go:195] Run: cat /version.json
	I0925 11:28:39.530766   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:39.533347   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.533598   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.533796   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.533834   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.534008   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.534017   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:39.534033   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.534116   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:39.534328   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.534397   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.534497   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:39.534546   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:39.534701   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:28:39.534716   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:28:39.619280   59899 ssh_runner.go:195] Run: systemctl --version
	I0925 11:28:39.651081   59899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 11:28:39.656908   59899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 11:28:39.656977   59899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 11:28:39.674233   59899 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 11:28:39.674259   59899 start.go:469] detecting cgroup driver to use...
	I0925 11:28:39.674415   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 11:28:39.693891   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 11:28:39.704196   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 11:28:39.714537   59899 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 11:28:39.714587   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 11:28:39.724833   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 11:28:39.734476   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 11:28:39.744763   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 11:28:39.755865   59899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 11:28:39.765565   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 11:28:39.775652   59899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 11:28:39.785628   59899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 11:28:39.794828   59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:28:39.915710   59899 ssh_runner.go:195] Run: sudo systemctl restart containerd
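
Editor's note: the sed calls above rewrite /etc/containerd/config.toml so containerd uses the "cgroupfs" driver and the runc v2 runtime before the restart. A minimal standalone sketch of the same rewrite in Go follows; the function name and the local (non-SSH) file access are assumptions for illustration, since the test drives sed over SSH as logged.

package main

import (
	"log"
	"os"
	"regexp"
)

// rewriteContainerdConfig mirrors the sed edits in the log: force
// SystemdCgroup = false and migrate legacy runtime names to runc v2.
func rewriteContainerdConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	out = regexp.MustCompile(`"io\.containerd\.runtime\.v1\.linux"|"io\.containerd\.runc\.v1"`).
		ReplaceAll(out, []byte(`"io.containerd.runc.v2"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := rewriteContainerdConfig("/etc/containerd/config.toml"); err != nil {
		log.Fatal(err)
	}
}
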
	I0925 11:28:39.933084   59899 start.go:469] detecting cgroup driver to use...
	I0925 11:28:39.933164   59899 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 11:28:39.949304   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 11:28:39.963709   59899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 11:28:39.980784   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 11:28:39.994887   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 11:28:40.007408   59899 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 11:28:40.034805   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 11:28:40.047786   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 11:28:40.066171   59899 ssh_runner.go:195] Run: which cri-dockerd
	I0925 11:28:40.070494   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 11:28:40.078000   59899 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 11:28:40.093462   59899 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 11:28:40.197902   59899 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 11:28:40.313798   59899 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 11:28:40.313947   59899 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 11:28:40.330472   59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:28:40.443989   59899 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 11:28:41.943902   59899 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.49987353s)
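
Editor's note: docker.go:554 above copies a small /etc/docker/daemon.json (130 bytes) that pins Docker to the "cgroupfs" driver before the restart. The log does not show the payload, so the key set in this sketch is an assumption; "native.cgroupdriver=cgroupfs" is the standard exec-opt for this purpose.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative daemon.json forcing the cgroupfs driver; the real
	// payload minikube generates may carry additional keys.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
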
	I0925 11:28:41.943995   59899 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 11:28:42.063894   59899 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 11:28:42.177577   59899 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 11:28:42.291042   59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:28:42.407796   59899 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 11:28:42.429673   59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:28:42.553611   59899 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0925 11:28:42.637258   59899 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 11:28:42.637336   59899 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 11:28:42.643315   59899 start.go:537] Will wait 60s for crictl version
	I0925 11:28:42.643380   59899 ssh_runner.go:195] Run: which crictl
	I0925 11:28:42.647521   59899 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 11:28:42.709061   59899 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0925 11:28:42.709123   59899 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 11:28:42.735005   59899 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
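
Editor's note: both version probes above shell out to the docker CLI with a Go template format string. A self-contained sketch of the same query via os/exec (assumes a local docker CLI on PATH; in the test this runs over the SSH runner):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same query the log runs remotely: ask the daemon for its version.
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker server version:", strings.TrimSpace(string(out)))
}
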
	I0925 11:28:39.992653   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:42.493405   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:42.763193   59899 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0925 11:28:42.763239   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
	I0925 11:28:42.766116   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:42.766453   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:42.766487   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:42.766740   59899 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0925 11:28:42.770645   59899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 11:28:42.782793   59899 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 11:28:42.782837   59899 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:28:42.805110   59899 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0925 11:28:42.805135   59899 docker.go:594] Images already preloaded, skipping extraction
	I0925 11:28:42.805190   59899 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:28:42.824840   59899 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0925 11:28:42.824876   59899 cache_images.go:84] Images are preloaded, skipping loading
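
Editor's note: the "Images are preloaded, skipping loading" decision compares the `docker images` listing against the image set required for the requested Kubernetes version. A minimal sketch of that set comparison; the function name is illustrative, and the required list is trimmed from the log output above.

package main

import "fmt"

// imagesPreloaded reports whether every required image appears in the
// repository:tag listing returned by `docker images --format ...`.
func imagesPreloaded(listed, required []string) bool {
	have := make(map[string]bool, len(listed))
	for _, img := range listed {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.28.2",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/coredns/coredns:v1.10.1",
	}
	listed := append(required, "gcr.io/k8s-minikube/storage-provisioner:v5")
	fmt.Println(imagesPreloaded(listed, required)) // true
}
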
	I0925 11:28:42.824941   59899 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 11:28:42.858255   59899 cni.go:84] Creating CNI manager for ""
	I0925 11:28:42.858285   59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:28:42.858303   59899 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 11:28:42.858319   59899 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.111 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-094323 NodeName:embed-certs-094323 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 11:28:42.858443   59899 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-094323"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
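Editor's note: the kubeadm.go:181 config above is rendered from the options struct logged at kubeadm.go:176. A small sketch of that kind of template-driven generation with text/template; the struct and template here are assumptions trimmed to a few fields, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	AdvertiseAddress  string
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.39.111",
		NodeName:          "embed-certs-094323",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.28.2",
	}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
}
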
	I0925 11:28:42.858508   59899 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-094323 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 11:28:42.858563   59899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 11:28:42.868791   59899 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 11:28:42.868861   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 11:28:42.878094   59899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0925 11:28:42.894185   59899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 11:28:42.910390   59899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
	I0925 11:28:42.929194   59899 ssh_runner.go:195] Run: grep 192.168.39.111	control-plane.minikube.internal$ /etc/hosts
	I0925 11:28:42.933290   59899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
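
Editor's note: the bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current one via a temp file copied back with sudo. The same filter-then-append step as a Go sketch; the hard-coded path and direct write are simplifications of what the log does remotely.

package main

import (
	"log"
	"os"
	"strings"
)

// pinHost removes existing lines ending in "\t"+name and appends "ip\tname".
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.111", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
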
	I0925 11:28:42.946061   59899 certs.go:56] Setting up /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323 for IP: 192.168.39.111
	I0925 11:28:42.946095   59899 certs.go:190] acquiring lock for shared ca certs: {Name:mkb77fd8e605e52ea68ab5351af7de9da389c0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:28:42.946253   59899 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17297-6032/.minikube/ca.key
	I0925 11:28:42.946292   59899 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.key
	I0925 11:28:42.946354   59899 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/client.key
	I0925 11:28:42.946414   59899 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/apiserver.key.f4aa454f
	I0925 11:28:42.946448   59899 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/proxy-client.key
	I0925 11:28:42.946581   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213.pem (1338 bytes)
	W0925 11:28:42.946628   59899 certs.go:433] ignoring /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213_empty.pem, impossibly tiny 0 bytes
	I0925 11:28:42.946648   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 11:28:42.946675   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem (1078 bytes)
	I0925 11:28:42.946706   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem (1123 bytes)
	I0925 11:28:42.946743   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem (1679 bytes)
	I0925 11:28:42.946793   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem (1708 bytes)
	I0925 11:28:42.947417   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 11:28:42.970517   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0925 11:28:42.995598   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 11:28:43.019025   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0925 11:28:43.044246   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 11:28:43.068806   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0925 11:28:43.093317   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 11:28:43.117196   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 11:28:43.140309   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213.pem --> /usr/share/ca-certificates/13213.pem (1338 bytes)
	I0925 11:28:43.164129   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem --> /usr/share/ca-certificates/132132.pem (1708 bytes)
	I0925 11:28:43.187747   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 11:28:43.211759   59899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 11:28:43.229751   59899 ssh_runner.go:195] Run: openssl version
	I0925 11:28:43.235370   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13213.pem && ln -fs /usr/share/ca-certificates/13213.pem /etc/ssl/certs/13213.pem"
	I0925 11:28:43.244462   59899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13213.pem
	I0925 11:28:43.249084   59899 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 10:38 /usr/share/ca-certificates/13213.pem
	I0925 11:28:43.249131   59899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13213.pem
	I0925 11:28:43.254522   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13213.pem /etc/ssl/certs/51391683.0"
	I0925 11:28:43.263996   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132132.pem && ln -fs /usr/share/ca-certificates/132132.pem /etc/ssl/certs/132132.pem"
	I0925 11:28:43.273424   59899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132132.pem
	I0925 11:28:43.278155   59899 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 10:38 /usr/share/ca-certificates/132132.pem
	I0925 11:28:43.278194   59899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132132.pem
	I0925 11:28:43.283762   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132132.pem /etc/ssl/certs/3ec20f2e.0"
	I0925 11:28:43.293817   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 11:28:43.303828   59899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:28:43.309173   59899 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:28:43.309215   59899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:28:43.315555   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 11:28:43.325092   59899 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 11:28:43.329555   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0925 11:28:43.335420   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0925 11:28:43.341663   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0925 11:28:43.347218   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0925 11:28:43.352934   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0925 11:28:43.359116   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
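
Editor's note: each `openssl x509 -checkend 86400` call above asks whether a certificate expires within 24 hours. The equivalent check in pure Go with crypto/x509, as a sketch; the test shells out to openssl as logged rather than doing this natively.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
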
	I0925 11:28:43.364415   59899 kubeadm.go:404] StartCluster: {Name:embed-certs-094323 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Ne
twork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 11:28:43.364539   59899 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 11:28:43.383931   59899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 11:28:43.393096   59899 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0925 11:28:43.393114   59899 kubeadm.go:636] restartCluster start
	I0925 11:28:43.393149   59899 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0925 11:28:43.402414   59899 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:43.403165   59899 kubeconfig.go:135] verify returned: extract IP: "embed-certs-094323" does not appear in /home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:28:43.403590   59899 kubeconfig.go:146] "embed-certs-094323" context is missing from /home/jenkins/minikube-integration/17297-6032/kubeconfig - will repair!
	I0925 11:28:43.404176   59899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
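
Editor's note: kubeconfig.go:146 above notices the profile's context is missing and repairs the kubeconfig under a write lock. A hedged sketch of the detection step using client-go's clientcmd package; the repair itself (adding Clusters/AuthInfos/Contexts entries and writing back) is omitted, and this is not minikube's actual code path.

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/17297-6032/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	if _, ok := cfg.Contexts["embed-certs-094323"]; !ok {
		fmt.Println(`context "embed-certs-094323" is missing - will repair!`)
		// A repair would add the cluster, user, and context entries and
		// write the file back, e.g. with clientcmd.WriteToFile.
	}
}
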
	I0925 11:28:43.405944   59899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0925 11:28:43.413960   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:43.414004   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:43.424035   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:43.424049   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:43.424076   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:43.435299   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:43.935935   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:43.936031   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:43.947516   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:39.905311   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:41.908598   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:44.404783   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:41.172736   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:43.174138   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:45.174205   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:44.990934   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:46.991805   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:44.435537   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:44.435624   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:44.447609   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:44.936220   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:44.936386   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:44.948140   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:45.435733   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:45.435829   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:45.448013   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:45.935443   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:45.935535   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:45.947333   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:46.435451   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:46.435515   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:46.447174   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:46.935705   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:46.935782   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:46.947562   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:47.436134   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:47.436202   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:47.447762   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:47.936080   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:47.936141   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:47.947832   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:48.435362   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:48.435430   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:48.446887   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:48.935379   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:48.935477   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:48.948793   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:46.904475   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:48.905486   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:47.176223   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:49.674353   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:49.491562   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:51.492069   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:53.492471   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:49.436282   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:49.436396   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:49.447719   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:49.936050   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:49.936137   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:49.948346   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:50.435443   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:50.435524   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:50.446725   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:50.936401   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:50.936479   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:50.948716   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:51.436316   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:51.436391   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:51.447984   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:51.936106   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:51.936183   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:51.951846   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:52.435363   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:52.435459   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:52.447499   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:52.936093   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:52.936170   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:52.948743   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:53.414466   59899 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0925 11:28:53.414503   59899 kubeadm.go:1128] stopping kube-system containers ...
	I0925 11:28:53.414561   59899 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 11:28:53.436706   59899 docker.go:463] Stopping containers: [5433505b8c84 5955297b2651 0b460a10ea1f 8f77078f7165 339fcb3416d5 b8e7d5af3c42 41f8be78a4f7 00a2998c5488 55442ce14fe2 a9a363aa2856 e1118b32fbd4 dcf727ef2c38 d7715df7bd8b fc60135d9ddb 56727523c1f3]
	I0925 11:28:53.436785   59899 ssh_runner.go:195] Run: docker stop 5433505b8c84 5955297b2651 0b460a10ea1f 8f77078f7165 339fcb3416d5 b8e7d5af3c42 41f8be78a4f7 00a2998c5488 55442ce14fe2 a9a363aa2856 e1118b32fbd4 dcf727ef2c38 d7715df7bd8b fc60135d9ddb 56727523c1f3
	I0925 11:28:53.460993   59899 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0925 11:28:53.476266   59899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 11:28:53.485682   59899 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 11:28:53.485753   59899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 11:28:53.495238   59899 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0925 11:28:53.495259   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:53.625292   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:51.404218   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:53.404644   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:52.173594   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:54.173762   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:55.992677   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:58.491954   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:54.299318   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:54.496012   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:54.595147   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:54.679425   59899 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:28:54.679506   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:54.698114   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:55.211538   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:55.711672   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:56.211025   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:56.711636   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:56.734459   59899 api_server.go:72] duration metric: took 2.055031465s to wait for apiserver process to appear ...
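
Editor's note: api_server.go:52 polls pgrep roughly twice a second until the kube-apiserver process appears, then records the elapsed time as a duration metric. A self-contained sketch of that poll loop; the pgrep pattern and interval mirror the log, while the one-minute deadline is an assumption.

package main

import (
	"context"
	"fmt"
	"log"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until pattern matches or ctx expires.
func waitForProcess(ctx context.Context, pattern string) (time.Duration, error) {
	start := time.Now()
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return time.Since(start), nil
		}
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	d, err := waitForProcess(ctx, "kube-apiserver.*minikube.*")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver appeared after %s\n", d)
}
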
	I0925 11:28:56.734482   59899 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:28:56.734499   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:28:56.735092   59899 api_server.go:269] stopped: https://192.168.39.111:8443/healthz: Get "https://192.168.39.111:8443/healthz": dial tcp 192.168.39.111:8443: connect: connection refused
	I0925 11:28:56.735125   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:28:56.735727   59899 api_server.go:269] stopped: https://192.168.39.111:8443/healthz: Get "https://192.168.39.111:8443/healthz": dial tcp 192.168.39.111:8443: connect: connection refused
	I0925 11:28:57.236460   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:28:55.405884   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:57.904799   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:56.673626   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:58.673704   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:00.709537   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0925 11:29:00.709569   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0925 11:29:00.709581   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:00.795585   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0925 11:29:00.795613   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0925 11:29:00.795624   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:00.911357   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 11:29:00.911393   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 11:29:01.236809   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:01.242260   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 11:29:01.242286   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 11:29:01.735856   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:01.743534   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 11:29:01.743563   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 11:29:02.236812   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:02.247395   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 200:
	ok
	I0925 11:29:02.257253   59899 api_server.go:141] control plane version: v1.28.2
	I0925 11:29:02.257277   59899 api_server.go:131] duration metric: took 5.522789199s to wait for apiserver health ...
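
Editor's note: the /healthz progression above (connection refused, then 403 before RBAC bootstrap, then 500 with failing poststarthooks, then 200 "ok") is the normal apiserver startup sequence; the wait loop simply retries until it reads a 200. A sketch of such a poller; TLS verification is skipped here for brevity, whereas the real client trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Sketch only: the real check verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	if err := waitHealthz("https://192.168.39.111:8443/healthz", time.Minute); err != nil {
		log.Fatal(err)
	}
}
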
	I0925 11:29:02.257286   59899 cni.go:84] Creating CNI manager for ""
	I0925 11:29:02.257297   59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:29:02.258988   59899 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 11:29:00.496638   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.992616   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.260493   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 11:29:02.275303   59899 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0925 11:29:02.297272   59899 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:29:02.308818   59899 system_pods.go:59] 8 kube-system pods found
	I0925 11:29:02.308855   59899 system_pods.go:61] "coredns-5dd5756b68-7kfz5" [9225f684-4ad2-462b-a20b-13dd27aad56f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:29:02.308868   59899 system_pods.go:61] "etcd-embed-certs-094323" [5603d9a0-390a-4cf1-ad8f-a976016d96e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0925 11:29:02.308879   59899 system_pods.go:61] "kube-apiserver-embed-certs-094323" [eb928fb0-77a3-45c5-81ce-03ffcb288548] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0925 11:29:02.308889   59899 system_pods.go:61] "kube-controller-manager-embed-certs-094323" [8ee4e42e-367a-4be8-9787-c6eb13913d8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0925 11:29:02.308900   59899 system_pods.go:61] "kube-proxy-5k6vp" [b5a3fb6d-bc10-4cde-a1f1-8c57a1fa480b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:29:02.308911   59899 system_pods.go:61] "kube-scheduler-embed-certs-094323" [4e15edd2-b5f1-4441-b940-2055f20354d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0925 11:29:02.308926   59899 system_pods.go:61] "metrics-server-57f55c9bc5-xcns4" [32a1d71d-7f4d-466a-b745-d2fdf6a88570] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:29:02.308942   59899 system_pods.go:61] "storage-provisioner" [91ac60cc-4154-4e62-aa3e-6c492764d7f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:29:02.308955   59899 system_pods.go:74] duration metric: took 11.663759ms to wait for pod list to return data ...
	I0925 11:29:02.308969   59899 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:29:02.315279   59899 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:29:02.315316   59899 node_conditions.go:123] node cpu capacity is 2
	I0925 11:29:02.315329   59899 node_conditions.go:105] duration metric: took 6.35463ms to run NodePressure ...
	I0925 11:29:02.315351   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:29:02.598238   59899 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0925 11:29:02.603645   59899 kubeadm.go:787] kubelet initialised
	I0925 11:29:02.603673   59899 kubeadm.go:788] duration metric: took 5.409805ms waiting for restarted kubelet to initialise ...
	I0925 11:29:02.603682   59899 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
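	
	The "extra waiting" announced above polls each system-critical pod's Ready condition until it is true or the 4m0s budget runs out. A rough client-go sketch of that kind of wait loop follows; the kubeconfig path and pod name are placeholders taken from the log, and this is a simplification, not the pod_ready.go source.
	
	// pod readiness wait sketch (illustrative only).
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(
				context.TODO(), "coredns-5dd5756b68-7kfz5", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return isPodReady(pod), nil
		})
		fmt.Println("ready:", err == nil)
	}
	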
	I0925 11:29:02.609652   59899 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:02.616919   59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.616945   59899 pod_ready.go:81] duration metric: took 7.267055ms waiting for pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:02.616957   59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.616966   59899 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:02.626927   59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "etcd-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.626952   59899 pod_ready.go:81] duration metric: took 9.977984ms waiting for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:02.626964   59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "etcd-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.626975   59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:02.635040   59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.635057   59899 pod_ready.go:81] duration metric: took 8.069751ms waiting for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:02.635065   59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.635071   59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:02.701570   59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.701594   59899 pod_ready.go:81] duration metric: took 66.51566ms waiting for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:02.701604   59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.701614   59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5k6vp" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:00.404282   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.407062   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:00.674496   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.676016   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:04.677117   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:05.005683   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:07.491820   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:04.513619   59899 pod_ready.go:92] pod "kube-proxy-5k6vp" in "kube-system" namespace has status "Ready":"True"
	I0925 11:29:04.513641   59899 pod_ready.go:81] duration metric: took 1.812019136s waiting for pod "kube-proxy-5k6vp" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:04.513650   59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:06.610704   59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:08.610973   59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:04.905976   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:07.404291   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:09.408011   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:07.173790   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:09.673547   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:09.492854   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:11.991906   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:11.110562   59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:13.112908   59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:11.905538   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.404450   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:12.173257   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.673817   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.492243   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:16.991655   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.610905   59899 pod_ready.go:92] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:29:14.610923   59899 pod_ready.go:81] duration metric: took 10.097268131s waiting for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:14.610932   59899 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:16.629749   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:16.412718   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:18.906798   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:17.173554   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:19.674607   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:18.992367   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:21.491588   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:19.130001   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:21.629643   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:21.403543   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:23.405654   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:22.173742   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:24.674422   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:23.992075   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:26.491409   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:28.492221   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:24.129530   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:26.629049   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:28.629817   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:25.909201   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:28.403475   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:27.174742   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:29.673522   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:30.990733   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:33.492080   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:31.128865   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:33.129900   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:30.405115   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:32.904179   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:31.674133   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:34.173962   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:35.990697   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:37.991964   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:35.629757   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:37.630073   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:34.905517   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:37.405590   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:36.175249   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:38.674512   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:40.490747   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:42.991730   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:40.129932   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:42.628523   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:39.904204   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:41.905925   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:44.406994   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:41.172242   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:43.173423   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:45.174163   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:44.992082   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:47.491243   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:44.629935   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:47.129139   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:46.904285   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.409716   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:47.174974   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.673662   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.993800   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:52.491813   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.130049   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:51.628211   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:53.629350   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:51.905344   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:53.905370   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:52.173811   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:54.673161   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:54.493022   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:56.993331   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:55.629518   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:57.629571   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:55.909272   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:58.403659   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:58.407567   57752 pod_ready.go:81] duration metric: took 4m0.000815308s waiting for pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:58.407592   57752 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:29:58.407601   57752 pod_ready.go:38] duration metric: took 4m6.831828709s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:29:58.407622   57752 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:29:58.407686   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:29:58.442532   57752 logs.go:284] 2 containers: [ae812308b161 50dd56b362e6]
	I0925 11:29:58.442627   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:29:58.466398   57752 logs.go:284] 2 containers: [f056fda5e129 771fdc2d4d72]
	I0925 11:29:58.466474   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:29:58.488629   57752 logs.go:284] 2 containers: [f4f7d2a397a7 19c28e83f034]
	I0925 11:29:58.488710   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:29:58.515985   57752 logs.go:284] 2 containers: [dd7534763296 0e6944ef9ef1]
	I0925 11:29:58.516069   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:29:58.551483   57752 logs.go:284] 2 containers: [ba51b7a85dfa c3c77640a284]
	I0925 11:29:58.551593   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:29:58.575447   57752 logs.go:284] 2 containers: [f5a2c4593b48 2b682a364274]
	I0925 11:29:58.575518   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:29:58.595332   57752 logs.go:284] 0 containers: []
	W0925 11:29:58.595354   57752 logs.go:286] No container was found matching "kindnet"
	I0925 11:29:58.595406   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:29:58.616993   57752 logs.go:284] 1 containers: [146977376d21]
	I0925 11:29:58.617053   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:29:58.641655   57752 logs.go:284] 2 containers: [a296191b186b e152c53b10e3]
	I0925 11:29:58.641682   57752 logs.go:123] Gathering logs for kube-controller-manager [f5a2c4593b48] ...
	I0925 11:29:58.641692   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a2c4593b48"
	I0925 11:29:58.697709   57752 logs.go:123] Gathering logs for storage-provisioner [e152c53b10e3] ...
	I0925 11:29:58.697746   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e152c53b10e3"
	I0925 11:29:58.720902   57752 logs.go:123] Gathering logs for container status ...
	I0925 11:29:58.720930   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:29:58.812571   57752 logs.go:123] Gathering logs for dmesg ...
	I0925 11:29:58.812609   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:29:58.833650   57752 logs.go:123] Gathering logs for kube-apiserver [50dd56b362e6] ...
	I0925 11:29:58.833678   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dd56b362e6"
	I0925 11:29:58.888959   57752 logs.go:123] Gathering logs for etcd [f056fda5e129] ...
	I0925 11:29:58.888999   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f056fda5e129"
	I0925 11:29:58.924906   57752 logs.go:123] Gathering logs for coredns [f4f7d2a397a7] ...
	I0925 11:29:58.924934   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f7d2a397a7"
	I0925 11:29:58.951722   57752 logs.go:123] Gathering logs for storage-provisioner [a296191b186b] ...
	I0925 11:29:58.951750   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a296191b186b"
	I0925 11:29:58.975890   57752 logs.go:123] Gathering logs for Docker ...
	I0925 11:29:58.975912   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:29:59.042048   57752 logs.go:123] Gathering logs for kube-apiserver [ae812308b161] ...
	I0925 11:29:59.042077   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae812308b161"
	I0925 11:29:59.090056   57752 logs.go:123] Gathering logs for coredns [19c28e83f034] ...
	I0925 11:29:59.090083   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c28e83f034"
	I0925 11:29:59.118231   57752 logs.go:123] Gathering logs for kube-proxy [ba51b7a85dfa] ...
	I0925 11:29:59.118257   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba51b7a85dfa"
	I0925 11:29:59.141561   57752 logs.go:123] Gathering logs for kube-proxy [c3c77640a284] ...
	I0925 11:29:59.141584   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c77640a284"
	I0925 11:29:59.168388   57752 logs.go:123] Gathering logs for kubernetes-dashboard [146977376d21] ...
	I0925 11:29:59.168420   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 146977376d21"
	I0925 11:29:59.202331   57752 logs.go:123] Gathering logs for kubelet ...
	I0925 11:29:59.202355   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:29:59.278282   57752 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:29:59.278317   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:29:59.431326   57752 logs.go:123] Gathering logs for kube-scheduler [dd7534763296] ...
	I0925 11:29:59.431356   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd7534763296"
	I0925 11:29:59.462487   57752 logs.go:123] Gathering logs for kube-controller-manager [2b682a364274] ...
	I0925 11:29:59.462516   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b682a364274"
	I0925 11:29:59.506895   57752 logs.go:123] Gathering logs for etcd [771fdc2d4d72] ...
	I0925 11:29:59.506927   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 771fdc2d4d72"
	I0925 11:29:59.551311   57752 logs.go:123] Gathering logs for kube-scheduler [0e6944ef9ef1] ...
	I0925 11:29:59.551351   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6944ef9ef1"
	I0925 11:29:56.674157   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:59.174193   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:59.490645   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:01.491108   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:03.491826   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:00.130429   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:02.630390   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:02.085037   57752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:30:02.106600   57752 api_server.go:72] duration metric: took 4m14.069395428s to wait for apiserver process to appear ...
	I0925 11:30:02.106631   57752 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:30:02.106709   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:02.131534   57752 logs.go:284] 2 containers: [ae812308b161 50dd56b362e6]
	I0925 11:30:02.131610   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:02.154915   57752 logs.go:284] 2 containers: [f056fda5e129 771fdc2d4d72]
	I0925 11:30:02.154979   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:02.178047   57752 logs.go:284] 2 containers: [f4f7d2a397a7 19c28e83f034]
	I0925 11:30:02.178108   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:02.202658   57752 logs.go:284] 2 containers: [dd7534763296 0e6944ef9ef1]
	I0925 11:30:02.202754   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:02.224819   57752 logs.go:284] 2 containers: [ba51b7a85dfa c3c77640a284]
	I0925 11:30:02.224908   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:02.246587   57752 logs.go:284] 2 containers: [f5a2c4593b48 2b682a364274]
	I0925 11:30:02.246650   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:02.267013   57752 logs.go:284] 0 containers: []
	W0925 11:30:02.267037   57752 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:02.267090   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:02.286403   57752 logs.go:284] 2 containers: [a296191b186b e152c53b10e3]
	I0925 11:30:02.286476   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:02.307111   57752 logs.go:284] 1 containers: [146977376d21]
	I0925 11:30:02.307141   57752 logs.go:123] Gathering logs for kube-controller-manager [f5a2c4593b48] ...
	I0925 11:30:02.307154   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a2c4593b48"
	I0925 11:30:02.347993   57752 logs.go:123] Gathering logs for storage-provisioner [a296191b186b] ...
	I0925 11:30:02.348022   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a296191b186b"
	I0925 11:30:02.370841   57752 logs.go:123] Gathering logs for kubernetes-dashboard [146977376d21] ...
	I0925 11:30:02.370875   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 146977376d21"
	I0925 11:30:02.396931   57752 logs.go:123] Gathering logs for etcd [771fdc2d4d72] ...
	I0925 11:30:02.396954   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 771fdc2d4d72"
	I0925 11:30:02.438996   57752 logs.go:123] Gathering logs for kube-scheduler [dd7534763296] ...
	I0925 11:30:02.439025   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd7534763296"
	I0925 11:30:02.464589   57752 logs.go:123] Gathering logs for kube-proxy [ba51b7a85dfa] ...
	I0925 11:30:02.464621   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba51b7a85dfa"
	I0925 11:30:02.492060   57752 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:02.492087   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:30:02.558928   57752 logs.go:123] Gathering logs for container status ...
	I0925 11:30:02.558959   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:02.654217   57752 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:02.654246   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:02.669423   57752 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:02.669453   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:02.802934   57752 logs.go:123] Gathering logs for coredns [f4f7d2a397a7] ...
	I0925 11:30:02.802959   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f7d2a397a7"
	I0925 11:30:02.835624   57752 logs.go:123] Gathering logs for coredns [19c28e83f034] ...
	I0925 11:30:02.835649   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c28e83f034"
	I0925 11:30:02.866826   57752 logs.go:123] Gathering logs for kube-scheduler [0e6944ef9ef1] ...
	I0925 11:30:02.866849   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6944ef9ef1"
	I0925 11:30:02.898744   57752 logs.go:123] Gathering logs for kube-apiserver [ae812308b161] ...
	I0925 11:30:02.898775   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae812308b161"
	I0925 11:30:02.934534   57752 logs.go:123] Gathering logs for kube-apiserver [50dd56b362e6] ...
	I0925 11:30:02.934567   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dd56b362e6"
	I0925 11:30:02.972310   57752 logs.go:123] Gathering logs for etcd [f056fda5e129] ...
	I0925 11:30:02.972337   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f056fda5e129"
	I0925 11:30:03.005474   57752 logs.go:123] Gathering logs for storage-provisioner [e152c53b10e3] ...
	I0925 11:30:03.005499   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e152c53b10e3"
	I0925 11:30:03.027346   57752 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:03.027368   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:03.099823   57752 logs.go:123] Gathering logs for kube-proxy [c3c77640a284] ...
	I0925 11:30:03.099857   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c77640a284"
	I0925 11:30:03.124682   57752 logs.go:123] Gathering logs for kube-controller-manager [2b682a364274] ...
	I0925 11:30:03.124717   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b682a364274"
	I0925 11:30:01.674624   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:04.179180   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:05.991507   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:08.492917   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:05.129924   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:07.630929   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:05.663871   57752 api_server.go:253] Checking apiserver healthz at https://192.168.72.162:8443/healthz ...
	I0925 11:30:05.669416   57752 api_server.go:279] https://192.168.72.162:8443/healthz returned 200:
	ok
	I0925 11:30:05.670783   57752 api_server.go:141] control plane version: v1.28.2
	I0925 11:30:05.670809   57752 api_server.go:131] duration metric: took 3.564170226s to wait for apiserver health ...
	I0925 11:30:05.670819   57752 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:30:05.670872   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:05.693324   57752 logs.go:284] 2 containers: [ae812308b161 50dd56b362e6]
	I0925 11:30:05.693399   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:05.717998   57752 logs.go:284] 2 containers: [f056fda5e129 771fdc2d4d72]
	I0925 11:30:05.718069   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:05.742708   57752 logs.go:284] 2 containers: [f4f7d2a397a7 19c28e83f034]
	I0925 11:30:05.742793   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:05.764298   57752 logs.go:284] 2 containers: [dd7534763296 0e6944ef9ef1]
	I0925 11:30:05.764374   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:05.785970   57752 logs.go:284] 2 containers: [ba51b7a85dfa c3c77640a284]
	I0925 11:30:05.786039   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:05.806950   57752 logs.go:284] 2 containers: [f5a2c4593b48 2b682a364274]
	I0925 11:30:05.807037   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:05.826462   57752 logs.go:284] 0 containers: []
	W0925 11:30:05.826487   57752 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:05.826540   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:05.845927   57752 logs.go:284] 1 containers: [146977376d21]
	I0925 11:30:05.845997   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:05.868573   57752 logs.go:284] 2 containers: [a296191b186b e152c53b10e3]
	I0925 11:30:05.868615   57752 logs.go:123] Gathering logs for kube-scheduler [0e6944ef9ef1] ...
	I0925 11:30:05.868629   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6944ef9ef1"
	I0925 11:30:05.909242   57752 logs.go:123] Gathering logs for kube-controller-manager [f5a2c4593b48] ...
	I0925 11:30:05.909270   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a2c4593b48"
	I0925 11:30:05.959647   57752 logs.go:123] Gathering logs for storage-provisioner [e152c53b10e3] ...
	I0925 11:30:05.959680   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e152c53b10e3"
	I0925 11:30:05.988448   57752 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:05.988480   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:06.067394   57752 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:06.067429   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:06.084943   57752 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:06.084971   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:06.238324   57752 logs.go:123] Gathering logs for etcd [f056fda5e129] ...
	I0925 11:30:06.238357   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f056fda5e129"
	I0925 11:30:06.273373   57752 logs.go:123] Gathering logs for coredns [f4f7d2a397a7] ...
	I0925 11:30:06.273403   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f7d2a397a7"
	I0925 11:30:06.303181   57752 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:06.303211   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:30:06.365354   57752 logs.go:123] Gathering logs for coredns [19c28e83f034] ...
	I0925 11:30:06.365398   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c28e83f034"
	I0925 11:30:06.391962   57752 logs.go:123] Gathering logs for kube-scheduler [dd7534763296] ...
	I0925 11:30:06.391989   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd7534763296"
	I0925 11:30:06.415389   57752 logs.go:123] Gathering logs for kube-proxy [c3c77640a284] ...
	I0925 11:30:06.415412   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c77640a284"
	I0925 11:30:06.441786   57752 logs.go:123] Gathering logs for kube-controller-manager [2b682a364274] ...
	I0925 11:30:06.441809   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b682a364274"
	I0925 11:30:06.479862   57752 logs.go:123] Gathering logs for kubernetes-dashboard [146977376d21] ...
	I0925 11:30:06.479892   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 146977376d21"
	I0925 11:30:06.507143   57752 logs.go:123] Gathering logs for kube-apiserver [50dd56b362e6] ...
	I0925 11:30:06.507186   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dd56b362e6"
	I0925 11:30:06.546486   57752 logs.go:123] Gathering logs for etcd [771fdc2d4d72] ...
	I0925 11:30:06.546514   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 771fdc2d4d72"
	I0925 11:30:06.591229   57752 logs.go:123] Gathering logs for kube-proxy [ba51b7a85dfa] ...
	I0925 11:30:06.591258   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba51b7a85dfa"
	I0925 11:30:06.616844   57752 logs.go:123] Gathering logs for container status ...
	I0925 11:30:06.616869   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:06.705576   57752 logs.go:123] Gathering logs for kube-apiserver [ae812308b161] ...
	I0925 11:30:06.705606   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae812308b161"
	I0925 11:30:06.742505   57752 logs.go:123] Gathering logs for storage-provisioner [a296191b186b] ...
	I0925 11:30:06.742533   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a296191b186b"
	I0925 11:30:09.274341   57752 system_pods.go:59] 8 kube-system pods found
	I0925 11:30:09.274368   57752 system_pods.go:61] "coredns-5dd5756b68-6786d" [e86c1a30-32f4-4518-9225-a6e735760871] Running
	I0925 11:30:09.274373   57752 system_pods.go:61] "etcd-no-preload-863905" [1af0b15d-6fff-41af-a97e-dc18bba9480f] Running
	I0925 11:30:09.274378   57752 system_pods.go:61] "kube-apiserver-no-preload-863905" [f7b1ffbf-13ef-4e05-9e71-87d03330cbf8] Running
	I0925 11:30:09.274383   57752 system_pods.go:61] "kube-controller-manager-no-preload-863905" [0fdd6d61-d653-4555-8333-e8275502c7b2] Running
	I0925 11:30:09.274386   57752 system_pods.go:61] "kube-proxy-g9dff" [db292442-0bc8-4d3f-b34f-c0142915ca47] Running
	I0925 11:30:09.274390   57752 system_pods.go:61] "kube-scheduler-no-preload-863905" [e832de51-a864-49ac-9919-9a02b16a029b] Running
	I0925 11:30:09.274397   57752 system_pods.go:61] "metrics-server-57f55c9bc5-p2tvr" [fc088a2c-3867-410d-b513-29e872f8156e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:09.274402   57752 system_pods.go:61] "storage-provisioner" [13df307c-c76e-4abd-bd03-165b04163d3a] Running
	I0925 11:30:09.274408   57752 system_pods.go:74] duration metric: took 3.603584218s to wait for pod list to return data ...
	I0925 11:30:09.274414   57752 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:30:09.276929   57752 default_sa.go:45] found service account: "default"
	I0925 11:30:09.276948   57752 default_sa.go:55] duration metric: took 2.5282ms for default service account to be created ...
	I0925 11:30:09.276954   57752 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:30:09.282656   57752 system_pods.go:86] 8 kube-system pods found
	I0925 11:30:09.282684   57752 system_pods.go:89] "coredns-5dd5756b68-6786d" [e86c1a30-32f4-4518-9225-a6e735760871] Running
	I0925 11:30:09.282690   57752 system_pods.go:89] "etcd-no-preload-863905" [1af0b15d-6fff-41af-a97e-dc18bba9480f] Running
	I0925 11:30:09.282694   57752 system_pods.go:89] "kube-apiserver-no-preload-863905" [f7b1ffbf-13ef-4e05-9e71-87d03330cbf8] Running
	I0925 11:30:09.282699   57752 system_pods.go:89] "kube-controller-manager-no-preload-863905" [0fdd6d61-d653-4555-8333-e8275502c7b2] Running
	I0925 11:30:09.282702   57752 system_pods.go:89] "kube-proxy-g9dff" [db292442-0bc8-4d3f-b34f-c0142915ca47] Running
	I0925 11:30:09.282706   57752 system_pods.go:89] "kube-scheduler-no-preload-863905" [e832de51-a864-49ac-9919-9a02b16a029b] Running
	I0925 11:30:09.282712   57752 system_pods.go:89] "metrics-server-57f55c9bc5-p2tvr" [fc088a2c-3867-410d-b513-29e872f8156e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:09.282721   57752 system_pods.go:89] "storage-provisioner" [13df307c-c76e-4abd-bd03-165b04163d3a] Running
	I0925 11:30:09.282728   57752 system_pods.go:126] duration metric: took 5.769715ms to wait for k8s-apps to be running ...
	I0925 11:30:09.282734   57752 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 11:30:09.282774   57752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:09.296447   57752 system_svc.go:56] duration metric: took 13.70254ms WaitForService to wait for kubelet.
	I0925 11:30:09.296472   57752 kubeadm.go:581] duration metric: took 4m21.259281902s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 11:30:09.296496   57752 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:30:09.300312   57752 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:30:09.300337   57752 node_conditions.go:123] node cpu capacity is 2
	I0925 11:30:09.300350   57752 node_conditions.go:105] duration metric: took 3.848191ms to run NodePressure ...
	I0925 11:30:09.300362   57752 start.go:228] waiting for startup goroutines ...
	I0925 11:30:09.300371   57752 start.go:233] waiting for cluster config update ...
	I0925 11:30:09.300384   57752 start.go:242] writing updated cluster config ...
	I0925 11:30:09.300719   57752 ssh_runner.go:195] Run: rm -f paused
	I0925 11:30:09.350285   57752 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0925 11:30:09.353257   57752 out.go:177] * Done! kubectl is now configured to use "no-preload-863905" cluster and "default" namespace by default
	I0925 11:30:06.676262   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:09.174330   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:10.992813   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:13.490354   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:09.636520   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:12.129471   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:11.175516   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:13.673816   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:14.366919   57426 pod_ready.go:81] duration metric: took 4m0.00014225s waiting for pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace to be "Ready" ...
	E0925 11:30:14.366953   57426 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:30:14.366991   57426 pod_ready.go:38] duration metric: took 4m1.195639658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:30:14.367015   57426 kubeadm.go:640] restartCluster took 5m2.405916758s
	W0925 11:30:14.367083   57426 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0925 11:30:14.367112   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0925 11:30:15.494599   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:17.993167   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:14.130508   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:16.132437   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:18.631163   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:17.424908   57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (3.057768249s)
	I0925 11:30:17.424975   57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:17.439514   57426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 11:30:17.449686   57426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 11:30:17.460096   57426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 11:30:17.460147   57426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0925 11:30:17.622252   57426 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0925 11:30:17.662261   57426 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I0925 11:30:17.759764   57426 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
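	
	At this point minikube has given up on restarting the old cluster and is re-initialising it from scratch: kubeadm reset followed by kubeadm init with the preflight warnings above tolerated. A bare-bones Go sketch of shelling out to that same command pair is below; the command strings are copied verbatim from the log, but the trivial runner is an illustration, not minikube's ssh_runner.
	
	// reset-and-reinit sketch (illustrative; run inside the minikube VM).
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func run(cmd string) error {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("$ %s\n%s\n", cmd, out)
		return err
	}
	
	func main() {
		// Commands mirror the log lines above.
		_ = run(`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force`)
		_ = run(`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU`)
	}
	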
	I0925 11:30:20.493076   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:22.995449   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:21.130370   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:23.137540   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:24.792048   57927 pod_ready.go:81] duration metric: took 4m0.000079144s waiting for pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace to be "Ready" ...
	E0925 11:30:24.792097   57927 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:30:24.792110   57927 pod_ready.go:38] duration metric: took 4m9.506946432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:30:24.792141   57927 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:30:24.792215   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:24.824238   57927 logs.go:284] 2 containers: [8b9c731d3b7e d7bd5b496cbd]
	I0925 11:30:24.824320   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:24.843686   57927 logs.go:284] 2 containers: [398bd2a5d8a1 5885667a7efa]
	I0925 11:30:24.843764   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:24.868292   57927 logs.go:284] 2 containers: [f04ac298d08a 7603adb1cbbb]
	I0925 11:30:24.868377   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:24.892540   57927 logs.go:284] 2 containers: [3815d034e8cc fb845c120fcf]
	I0925 11:30:24.892617   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:24.919019   57927 logs.go:284] 2 containers: [3061d1aa366b 30075b5efc6f]
	I0925 11:30:24.919091   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:24.946855   57927 logs.go:284] 2 containers: [b75d214e650c 1e96b0e25a6d]
	I0925 11:30:24.946930   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:24.989142   57927 logs.go:284] 0 containers: []
	W0925 11:30:24.989168   57927 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:24.989220   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:25.011261   57927 logs.go:284] 2 containers: [0f7378f7cd7f b9d2c22b02cb]
	I0925 11:30:25.011345   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:25.030950   57927 logs.go:284] 1 containers: [f3cb7eacbd5f]
	I0925 11:30:25.030977   57927 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:25.030989   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:25.120210   57927 logs.go:123] Gathering logs for etcd [398bd2a5d8a1] ...
	I0925 11:30:25.120239   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 398bd2a5d8a1"
	I0925 11:30:25.152215   57927 logs.go:123] Gathering logs for etcd [5885667a7efa] ...
	I0925 11:30:25.152243   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5885667a7efa"
	I0925 11:30:25.194959   57927 logs.go:123] Gathering logs for kube-scheduler [3815d034e8cc] ...
	I0925 11:30:25.194997   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3815d034e8cc"
	I0925 11:30:25.229067   57927 logs.go:123] Gathering logs for kube-proxy [3061d1aa366b] ...
	I0925 11:30:25.229094   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3061d1aa366b"
	I0925 11:30:25.256359   57927 logs.go:123] Gathering logs for kubernetes-dashboard [f3cb7eacbd5f] ...
	I0925 11:30:25.256386   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3cb7eacbd5f"
	I0925 11:30:25.280428   57927 logs.go:123] Gathering logs for kube-apiserver [d7bd5b496cbd] ...
	I0925 11:30:25.280459   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bd5b496cbd"
	I0925 11:30:25.330876   57927 logs.go:123] Gathering logs for coredns [f04ac298d08a] ...
	I0925 11:30:25.330902   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04ac298d08a"
	I0925 11:30:25.353121   57927 logs.go:123] Gathering logs for coredns [7603adb1cbbb] ...
	I0925 11:30:25.353148   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7603adb1cbbb"
	I0925 11:30:25.375127   57927 logs.go:123] Gathering logs for storage-provisioner [0f7378f7cd7f] ...
	I0925 11:30:25.375154   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f7378f7cd7f"
	I0925 11:30:25.402664   57927 logs.go:123] Gathering logs for storage-provisioner [b9d2c22b02cb] ...
	I0925 11:30:25.402690   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9d2c22b02cb"
	I0925 11:30:25.428214   57927 logs.go:123] Gathering logs for container status ...
	I0925 11:30:25.428238   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:25.509982   57927 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:25.510015   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:25.525584   57927 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:25.525623   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:25.696377   57927 logs.go:123] Gathering logs for kube-apiserver [8b9c731d3b7e] ...
	I0925 11:30:25.696402   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9c731d3b7e"
	I0925 11:30:25.734242   57927 logs.go:123] Gathering logs for kube-scheduler [fb845c120fcf] ...
	I0925 11:30:25.734271   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb845c120fcf"
	I0925 11:30:25.763410   57927 logs.go:123] Gathering logs for kube-proxy [30075b5efc6f] ...
	I0925 11:30:25.763436   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30075b5efc6f"
	I0925 11:30:25.797529   57927 logs.go:123] Gathering logs for kube-controller-manager [b75d214e650c] ...
	I0925 11:30:25.797556   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75d214e650c"
	I0925 11:30:25.843899   57927 logs.go:123] Gathering logs for kube-controller-manager [1e96b0e25a6d] ...
	I0925 11:30:25.843927   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e96b0e25a6d"
	I0925 11:30:25.896478   57927 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:25.896507   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
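
The sweep above repeats one pattern per component: list matching container IDs with a docker name filter, then tail each container's logs. Below is a minimal stand-alone Go sketch of that pattern (component names are taken from the trace; everything else is an assumption, not minikube's own logs.go):

    // Minimal sketch of the per-component log sweep seen above: list
    // container IDs with a docker name filter, then tail each one's logs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner", "kubernetes-dashboard"} {
            ids, err := containerIDs(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                // Mirrors: docker logs --tail 400 <id>
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
            }
        }
    }

Run on the node, this reproduces roughly the same per-component output the harness gathers on each verification pass.
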
	I0925 11:30:28.465765   57927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:30:28.480996   57927 api_server.go:72] duration metric: took 4m15.769590927s to wait for apiserver process to appear ...
	I0925 11:30:28.481023   57927 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:30:28.481101   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:25.631323   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:28.129055   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:30.749642   57426 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0925 11:30:30.749742   57426 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 11:30:30.749858   57426 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 11:30:30.749944   57426 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 11:30:30.750021   57426 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 11:30:30.750109   57426 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 11:30:30.750191   57426 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 11:30:30.750247   57426 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0925 11:30:30.750371   57426 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 11:30:30.751913   57426 out.go:204]   - Generating certificates and keys ...
	I0925 11:30:30.752003   57426 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 11:30:30.752119   57426 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 11:30:30.752232   57426 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0925 11:30:30.752318   57426 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0925 11:30:30.752414   57426 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0925 11:30:30.752468   57426 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0925 11:30:30.752570   57426 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0925 11:30:30.752681   57426 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0925 11:30:30.752781   57426 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0925 11:30:30.752890   57426 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0925 11:30:30.752940   57426 kubeadm.go:322] [certs] Using the existing "sa" key
	I0925 11:30:30.753020   57426 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 11:30:30.753090   57426 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 11:30:30.753154   57426 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 11:30:30.753251   57426 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 11:30:30.753324   57426 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 11:30:30.753398   57426 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 11:30:30.755107   57426 out.go:204]   - Booting up control plane ...
	I0925 11:30:30.755208   57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 11:30:30.755334   57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 11:30:30.755426   57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 11:30:30.755500   57426 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 11:30:30.755652   57426 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 11:30:30.755754   57426 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.505077 seconds
	I0925 11:30:30.755912   57426 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 11:30:30.756083   57426 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 11:30:30.756182   57426 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 11:30:30.756384   57426 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-694015 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0925 11:30:30.756471   57426 kubeadm.go:322] [bootstrap-token] Using token: snq27o.n0f9uw50v17gbayd
	I0925 11:30:28.509506   57927 logs.go:284] 2 containers: [8b9c731d3b7e d7bd5b496cbd]
	I0925 11:30:28.509575   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:28.532621   57927 logs.go:284] 2 containers: [398bd2a5d8a1 5885667a7efa]
	I0925 11:30:28.532723   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:28.554799   57927 logs.go:284] 2 containers: [f04ac298d08a 7603adb1cbbb]
	I0925 11:30:28.554878   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:28.574977   57927 logs.go:284] 2 containers: [3815d034e8cc fb845c120fcf]
	I0925 11:30:28.575048   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:28.596014   57927 logs.go:284] 2 containers: [3061d1aa366b 30075b5efc6f]
	I0925 11:30:28.596094   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:28.616627   57927 logs.go:284] 2 containers: [b75d214e650c 1e96b0e25a6d]
	I0925 11:30:28.616712   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:28.636762   57927 logs.go:284] 0 containers: []
	W0925 11:30:28.636782   57927 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:28.636838   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:28.659028   57927 logs.go:284] 2 containers: [0f7378f7cd7f b9d2c22b02cb]
	I0925 11:30:28.659094   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:28.680689   57927 logs.go:284] 1 containers: [f3cb7eacbd5f]
	I0925 11:30:28.680722   57927 logs.go:123] Gathering logs for etcd [398bd2a5d8a1] ...
	I0925 11:30:28.680736   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 398bd2a5d8a1"
	I0925 11:30:28.714051   57927 logs.go:123] Gathering logs for etcd [5885667a7efa] ...
	I0925 11:30:28.714078   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5885667a7efa"
	I0925 11:30:28.762170   57927 logs.go:123] Gathering logs for kubernetes-dashboard [f3cb7eacbd5f] ...
	I0925 11:30:28.762204   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3cb7eacbd5f"
	I0925 11:30:28.788343   57927 logs.go:123] Gathering logs for container status ...
	I0925 11:30:28.788371   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:28.869517   57927 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:28.869548   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:29.002897   57927 logs.go:123] Gathering logs for coredns [7603adb1cbbb] ...
	I0925 11:30:29.002920   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7603adb1cbbb"
	I0925 11:30:29.032416   57927 logs.go:123] Gathering logs for kube-scheduler [3815d034e8cc] ...
	I0925 11:30:29.032444   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3815d034e8cc"
	I0925 11:30:29.063893   57927 logs.go:123] Gathering logs for kube-proxy [3061d1aa366b] ...
	I0925 11:30:29.063921   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3061d1aa366b"
	I0925 11:30:29.089890   57927 logs.go:123] Gathering logs for kube-controller-manager [b75d214e650c] ...
	I0925 11:30:29.089916   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75d214e650c"
	I0925 11:30:29.132797   57927 logs.go:123] Gathering logs for storage-provisioner [b9d2c22b02cb] ...
	I0925 11:30:29.132827   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9d2c22b02cb"
	I0925 11:30:29.155350   57927 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:29.155371   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:30:29.213418   57927 logs.go:123] Gathering logs for kube-apiserver [d7bd5b496cbd] ...
	I0925 11:30:29.213447   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bd5b496cbd"
	I0925 11:30:29.254863   57927 logs.go:123] Gathering logs for coredns [f04ac298d08a] ...
	I0925 11:30:29.254891   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04ac298d08a"
	I0925 11:30:29.277677   57927 logs.go:123] Gathering logs for kube-scheduler [fb845c120fcf] ...
	I0925 11:30:29.277709   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb845c120fcf"
	I0925 11:30:29.308393   57927 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:29.308422   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:29.330968   57927 logs.go:123] Gathering logs for kube-apiserver [8b9c731d3b7e] ...
	I0925 11:30:29.330989   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9c731d3b7e"
	I0925 11:30:29.374515   57927 logs.go:123] Gathering logs for kube-proxy [30075b5efc6f] ...
	I0925 11:30:29.374542   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30075b5efc6f"
	I0925 11:30:29.399946   57927 logs.go:123] Gathering logs for kube-controller-manager [1e96b0e25a6d] ...
	I0925 11:30:29.399975   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e96b0e25a6d"
	I0925 11:30:29.445837   57927 logs.go:123] Gathering logs for storage-provisioner [0f7378f7cd7f] ...
	I0925 11:30:29.445870   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f7378f7cd7f"
	I0925 11:30:29.468320   57927 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:29.468346   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:32.042767   57927 api_server.go:253] Checking apiserver healthz at https://192.168.61.208:8444/healthz ...
	I0925 11:30:32.048546   57927 api_server.go:279] https://192.168.61.208:8444/healthz returned 200:
	ok
	I0925 11:30:32.052014   57927 api_server.go:141] control plane version: v1.28.2
	I0925 11:30:32.052036   57927 api_server.go:131] duration metric: took 3.571006059s to wait for apiserver health ...
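
The health check above is a plain HTTPS GET against /healthz that expects HTTP 200 with body "ok". A minimal sketch follows, with the caveat that skipping TLS verification is an assumption made for brevity here; minikube's own api_server.go authenticates with the cluster's certificates.

    // Probe the apiserver healthz endpoint logged above: healthy means
    // HTTP 200 and a body of exactly "ok".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func apiserverHealthy(url string) bool {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption for illustration only; do not skip verification in real code.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok"
    }

    func main() {
        // Address taken from the trace above.
        fmt.Println(apiserverHealthy("https://192.168.61.208:8444/healthz"))
    }
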
	I0925 11:30:32.052046   57927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:30:32.052108   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:32.083762   57927 logs.go:284] 2 containers: [8b9c731d3b7e d7bd5b496cbd]
	I0925 11:30:32.083848   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:32.106317   57927 logs.go:284] 2 containers: [398bd2a5d8a1 5885667a7efa]
	I0925 11:30:32.106392   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:32.128245   57927 logs.go:284] 2 containers: [f04ac298d08a 7603adb1cbbb]
	I0925 11:30:32.128333   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:32.148973   57927 logs.go:284] 2 containers: [3815d034e8cc fb845c120fcf]
	I0925 11:30:32.149052   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:32.174028   57927 logs.go:284] 2 containers: [3061d1aa366b 30075b5efc6f]
	I0925 11:30:32.174103   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:32.196115   57927 logs.go:284] 2 containers: [b75d214e650c 1e96b0e25a6d]
	I0925 11:30:32.196181   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:32.216678   57927 logs.go:284] 0 containers: []
	W0925 11:30:32.216702   57927 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:32.216757   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:32.237388   57927 logs.go:284] 1 containers: [f3cb7eacbd5f]
	I0925 11:30:32.237473   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:32.257088   57927 logs.go:284] 2 containers: [0f7378f7cd7f b9d2c22b02cb]
	I0925 11:30:32.257112   57927 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:32.257122   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:32.327894   57927 logs.go:123] Gathering logs for kube-apiserver [8b9c731d3b7e] ...
	I0925 11:30:32.327929   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9c731d3b7e"
	I0925 11:30:32.365128   57927 logs.go:123] Gathering logs for kube-scheduler [3815d034e8cc] ...
	I0925 11:30:32.365156   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3815d034e8cc"
	I0925 11:30:32.394664   57927 logs.go:123] Gathering logs for kubernetes-dashboard [f3cb7eacbd5f] ...
	I0925 11:30:32.394703   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3cb7eacbd5f"
	I0925 11:30:32.450709   57927 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:32.450737   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:30:32.512407   57927 logs.go:123] Gathering logs for container status ...
	I0925 11:30:32.512442   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:32.602958   57927 logs.go:123] Gathering logs for kube-apiserver [d7bd5b496cbd] ...
	I0925 11:30:32.602985   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bd5b496cbd"
	I0925 11:30:32.646449   57927 logs.go:123] Gathering logs for etcd [5885667a7efa] ...
	I0925 11:30:32.646478   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5885667a7efa"
	I0925 11:30:32.693817   57927 logs.go:123] Gathering logs for coredns [7603adb1cbbb] ...
	I0925 11:30:32.693843   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7603adb1cbbb"
	I0925 11:30:32.728336   57927 logs.go:123] Gathering logs for kube-proxy [3061d1aa366b] ...
	I0925 11:30:32.728364   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3061d1aa366b"
	I0925 11:30:32.754018   57927 logs.go:123] Gathering logs for kube-controller-manager [1e96b0e25a6d] ...
	I0925 11:30:32.754053   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e96b0e25a6d"
	I0925 11:30:32.791438   57927 logs.go:123] Gathering logs for storage-provisioner [0f7378f7cd7f] ...
	I0925 11:30:32.791473   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f7378f7cd7f"
	I0925 11:30:32.813473   57927 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:32.813501   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:32.827795   57927 logs.go:123] Gathering logs for etcd [398bd2a5d8a1] ...
	I0925 11:30:32.827824   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 398bd2a5d8a1"
	I0925 11:30:32.862910   57927 logs.go:123] Gathering logs for kube-scheduler [fb845c120fcf] ...
	I0925 11:30:32.862934   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb845c120fcf"
	I0925 11:30:32.899610   57927 logs.go:123] Gathering logs for kube-controller-manager [b75d214e650c] ...
	I0925 11:30:32.899642   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75d214e650c"
	I0925 11:30:32.941021   57927 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:32.941056   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:33.072749   57927 logs.go:123] Gathering logs for coredns [f04ac298d08a] ...
	I0925 11:30:33.072786   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04ac298d08a"
	I0925 11:30:33.105984   57927 logs.go:123] Gathering logs for kube-proxy [30075b5efc6f] ...
	I0925 11:30:33.106016   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30075b5efc6f"
	I0925 11:30:33.132338   57927 logs.go:123] Gathering logs for storage-provisioner [b9d2c22b02cb] ...
	I0925 11:30:33.132366   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9d2c22b02cb"
	I0925 11:30:30.629720   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:33.133383   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:30.758173   57426 out.go:204]   - Configuring RBAC rules ...
	I0925 11:30:30.758310   57426 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 11:30:30.758487   57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 11:30:30.758649   57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 11:30:30.758810   57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 11:30:30.758962   57426 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 11:30:30.759033   57426 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 11:30:30.759112   57426 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 11:30:30.759121   57426 kubeadm.go:322] 
	I0925 11:30:30.759191   57426 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 11:30:30.759205   57426 kubeadm.go:322] 
	I0925 11:30:30.759275   57426 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 11:30:30.759285   57426 kubeadm.go:322] 
	I0925 11:30:30.759329   57426 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 11:30:30.759379   57426 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 11:30:30.759421   57426 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 11:30:30.759429   57426 kubeadm.go:322] 
	I0925 11:30:30.759483   57426 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 11:30:30.759595   57426 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 11:30:30.759689   57426 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 11:30:30.759705   57426 kubeadm.go:322] 
	I0925 11:30:30.759821   57426 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0925 11:30:30.759962   57426 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 11:30:30.759977   57426 kubeadm.go:322] 
	I0925 11:30:30.760084   57426 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token snq27o.n0f9uw50v17gbayd \
	I0925 11:30:30.760216   57426 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 \
	I0925 11:30:30.760255   57426 kubeadm.go:322]     --control-plane 	  
	I0925 11:30:30.760264   57426 kubeadm.go:322] 
	I0925 11:30:30.760361   57426 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 11:30:30.760370   57426 kubeadm.go:322] 
	I0925 11:30:30.760469   57426 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token snq27o.n0f9uw50v17gbayd \
	I0925 11:30:30.760617   57426 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 
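
The --discovery-token-ca-cert-hash in the join command above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A hedged sketch that recomputes it; the CA path below is the conventional kubeadm location and is assumed here:

    // Recompute kubeadm's CA cert hash: sha256 over the DER-encoded
    // SubjectPublicKeyInfo of the cluster CA certificate.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }

The printed value should match the hash embedded in the join command if run against the same CA.
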
	I0925 11:30:30.760630   57426 cni.go:84] Creating CNI manager for ""
	I0925 11:30:30.760655   57426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 11:30:30.760693   57426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 11:30:30.760827   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:30.760899   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=old-k8s-version-694015 minikube.k8s.io/updated_at=2023_09_25T11_30_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:30.820984   57426 ops.go:34] apiserver oom_adj: -16
	I0925 11:30:31.034555   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:31.165894   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:31.768765   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:32.269393   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:32.768687   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:33.269126   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:33.768794   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:34.269149   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:34.769469   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:35.268685   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:35.664427   57927 system_pods.go:59] 8 kube-system pods found
	I0925 11:30:35.664451   57927 system_pods.go:61] "coredns-5dd5756b68-lp744" [67024c7b-a800-4c05-80f8-ad56b637d721] Running
	I0925 11:30:35.664456   57927 system_pods.go:61] "etcd-default-k8s-diff-port-319133" [bc48a820-15fc-46c3-be99-4842fec268b5] Running
	I0925 11:30:35.664461   57927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-319133" [04c9e550-fac9-4b14-a53f-f49a8d28f3aa] Running
	I0925 11:30:35.664466   57927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-319133" [84d26a48-a3cb-480e-818a-04e47c47a04a] Running
	I0925 11:30:35.664473   57927 system_pods.go:61] "kube-proxy-p4dnh" [8d162f05-34ef-431b-ac18-fc0ea1f48a5a] Running
	I0925 11:30:35.664479   57927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-319133" [d66d0709-f0f0-482b-88fc-cbf209c895fd] Running
	I0925 11:30:35.664489   57927 system_pods.go:61] "metrics-server-57f55c9bc5-wcdlv" [66045763-8356-4769-930f-a82fc160d7fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:35.664507   57927 system_pods.go:61] "storage-provisioner" [eaa8bad6-4a31-4429-98ff-099273d7184f] Running
	I0925 11:30:35.664518   57927 system_pods.go:74] duration metric: took 3.612465435s to wait for pod list to return data ...
	I0925 11:30:35.664526   57927 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:30:35.669238   57927 default_sa.go:45] found service account: "default"
	I0925 11:30:35.669258   57927 default_sa.go:55] duration metric: took 4.728219ms for default service account to be created ...
	I0925 11:30:35.669266   57927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:30:35.677224   57927 system_pods.go:86] 8 kube-system pods found
	I0925 11:30:35.677248   57927 system_pods.go:89] "coredns-5dd5756b68-lp744" [67024c7b-a800-4c05-80f8-ad56b637d721] Running
	I0925 11:30:35.677254   57927 system_pods.go:89] "etcd-default-k8s-diff-port-319133" [bc48a820-15fc-46c3-be99-4842fec268b5] Running
	I0925 11:30:35.677260   57927 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-319133" [04c9e550-fac9-4b14-a53f-f49a8d28f3aa] Running
	I0925 11:30:35.677265   57927 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-319133" [84d26a48-a3cb-480e-818a-04e47c47a04a] Running
	I0925 11:30:35.677269   57927 system_pods.go:89] "kube-proxy-p4dnh" [8d162f05-34ef-431b-ac18-fc0ea1f48a5a] Running
	I0925 11:30:35.677273   57927 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-319133" [d66d0709-f0f0-482b-88fc-cbf209c895fd] Running
	I0925 11:30:35.677279   57927 system_pods.go:89] "metrics-server-57f55c9bc5-wcdlv" [66045763-8356-4769-930f-a82fc160d7fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:35.677285   57927 system_pods.go:89] "storage-provisioner" [eaa8bad6-4a31-4429-98ff-099273d7184f] Running
	I0925 11:30:35.677291   57927 system_pods.go:126] duration metric: took 8.021227ms to wait for k8s-apps to be running ...
	I0925 11:30:35.677301   57927 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 11:30:35.677340   57927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:35.696637   57927 system_svc.go:56] duration metric: took 19.327902ms WaitForService to wait for kubelet.
	I0925 11:30:35.696659   57927 kubeadm.go:581] duration metric: took 4m22.985262397s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 11:30:35.696712   57927 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:30:35.701675   57927 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:30:35.701709   57927 node_conditions.go:123] node cpu capacity is 2
	I0925 11:30:35.701719   57927 node_conditions.go:105] duration metric: took 4.999654ms to run NodePressure ...
	I0925 11:30:35.701730   57927 start.go:228] waiting for startup goroutines ...
	I0925 11:30:35.701737   57927 start.go:233] waiting for cluster config update ...
	I0925 11:30:35.701749   57927 start.go:242] writing updated cluster config ...
	I0925 11:30:35.702076   57927 ssh_runner.go:195] Run: rm -f paused
	I0925 11:30:35.751111   57927 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0925 11:30:35.753033   57927 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-319133" cluster and "default" namespace by default
	I0925 11:30:35.134183   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:37.629084   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
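
The recurring pod_ready lines poll the pod's Ready condition until it flips to "True" or a deadline expires. A rough stand-in using kubectl's jsonpath output is sketched below; minikube itself uses client-go, and the 2-second interval and 6-minute deadline are assumptions loosely based on the trace.

    // Poll a pod's Ready condition until "True" or deadline.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func podReady(ns, pod string) bool {
        out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod, "-o",
            `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            if podReady("kube-system", "metrics-server-57f55c9bc5-xcns4") {
                fmt.Println(`pod has status "Ready":"True"`)
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println(`pod still has status "Ready":"False" at deadline`)
    }
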
	I0925 11:30:35.769384   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:36.269510   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:36.768848   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:37.268799   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:37.769259   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:38.269444   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:38.769081   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:39.269471   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:39.768795   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:40.269215   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:39.631655   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:42.128083   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:40.768992   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:41.269161   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:41.768782   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:42.269438   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:42.769149   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:43.268490   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:43.768911   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:44.269363   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:44.769428   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:45.268548   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:45.769489   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:46.046613   57426 kubeadm.go:1081] duration metric: took 15.285826285s to wait for elevateKubeSystemPrivileges.
	I0925 11:30:46.046655   57426 kubeadm.go:406] StartCluster complete in 5m34.119546847s
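
The burst of `kubectl get sa default` retries above is elevateKubeSystemPrivileges waiting for the default service account to exist before binding cluster-admin to kube-system:default (the clusterrolebinding command issued earlier in the trace). A sketch of that wait-then-bind sequence; the ~500ms interval is inferred from the logged timestamps, and the two-minute cap is an assumption:

    // Wait for the default service account, then grant cluster-admin
    // to kube-system:default, mirroring the retry loop in the trace.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    const (
        kubectl    = "/var/lib/minikube/binaries/v1.16.0/kubectl"
        kubeconfig = "--kubeconfig=/var/lib/minikube/kubeconfig"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run() == nil {
                break
            }
            time.Sleep(500 * time.Millisecond)
        }
        err := exec.Command("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
            "--clusterrole=cluster-admin", "--serviceaccount=kube-system:default",
            kubeconfig).Run()
        fmt.Println("clusterrolebinding minikube-rbac created, err =", err)
    }
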
	I0925 11:30:46.046676   57426 settings.go:142] acquiring lock: {Name:mk372f3d0f6e5777ebfc48341e146821f27f636c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:30:46.046764   57426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:30:46.048206   57426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:30:46.048432   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 11:30:46.048574   57426 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 11:30:46.048644   57426 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-694015"
	I0925 11:30:46.048653   57426 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-694015"
	I0925 11:30:46.048678   57426 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-694015"
	I0925 11:30:46.048687   57426 addons.go:69] Setting dashboard=true in profile "old-k8s-version-694015"
	W0925 11:30:46.048690   57426 addons.go:240] addon storage-provisioner should already be in state true
	I0925 11:30:46.048698   57426 addons.go:231] Setting addon dashboard=true in "old-k8s-version-694015"
	W0925 11:30:46.048709   57426 addons.go:240] addon dashboard should already be in state true
	I0925 11:30:46.048720   57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 11:30:46.048735   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.048744   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.048818   57426 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-694015"
	I0925 11:30:46.048847   57426 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-694015"
	W0925 11:30:46.048855   57426 addons.go:240] addon metrics-server should already be in state true
	I0925 11:30:46.048680   57426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-694015"
	I0925 11:30:46.048796   57426 cache.go:107] acquiring lock: {Name:mk67fca357e44d730577a3f111223198f60ef976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:30:46.048888   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.048935   57426 cache.go:115] /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0925 11:30:46.048944   57426 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 153.391µs
	I0925 11:30:46.048955   57426 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0925 11:30:46.048963   57426 cache.go:87] Successfully saved all images to host disk.
	I0925 11:30:46.049135   57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 11:30:46.049144   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049162   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049168   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049183   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049247   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049260   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049320   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049333   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049505   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049555   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.072180   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
	I0925 11:30:46.072238   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0925 11:30:46.072269   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I0925 11:30:46.072356   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0925 11:30:46.072357   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I0925 11:30:46.072696   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.072776   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.072860   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.073344   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.073364   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.073496   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.073509   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.073509   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.073756   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.073762   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.073964   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.074195   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.074210   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.074253   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.074286   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.074439   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.074467   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.074610   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.074656   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.074686   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.074715   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.074930   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.075069   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.075101   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.075234   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.075269   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.075582   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.075811   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.077659   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.077697   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.094611   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
	I0925 11:30:46.097022   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
	I0925 11:30:46.097145   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.097460   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.097748   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.097767   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.098172   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.098314   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.098333   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.098564   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.098618   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.099229   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.101256   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.103863   57426 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0925 11:30:46.102124   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.102436   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0925 11:30:46.106576   57426 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0925 11:30:46.105560   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.109500   57426 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 11:30:46.108220   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0925 11:30:46.108845   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.110913   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.110969   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0925 11:30:46.110985   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.110999   57426 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:30:46.111011   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 11:30:46.111024   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.112450   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.112637   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.112839   57426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:30:46.112862   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.115509   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.115949   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.115983   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.116123   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.116214   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.116253   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.116342   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.116466   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.116484   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.116508   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.116774   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.116925   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.117104   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.117252   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.119073   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.119413   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.119430   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.119685   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.119854   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.120011   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.120148   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.127174   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I0925 11:30:46.127843   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.128399   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.128428   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.128967   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.129155   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.129945   57426 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-694015" context rescaled to 1 replicas
	I0925 11:30:46.129977   57426 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 11:30:46.131741   57426 out.go:177] * Verifying Kubernetes components...
	I0925 11:30:46.133087   57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:46.130848   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.134728   57426 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0925 11:30:44.129372   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:46.133247   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:48.630362   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:46.136080   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 11:30:46.136097   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0925 11:30:46.136115   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.139231   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.139692   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.139718   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.139957   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.140113   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.140252   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.140377   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.147885   57426 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-694015"
	W0925 11:30:46.147907   57426 addons.go:240] addon default-storageclass should already be in state true
	I0925 11:30:46.147934   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.148356   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.148384   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.173474   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0925 11:30:46.174243   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.174879   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.174900   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.176033   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.176694   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.176736   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.196631   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
	I0925 11:30:46.197107   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.197645   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.197665   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.198067   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.198270   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.200093   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.200354   57426 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 11:30:46.200371   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 11:30:46.200390   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.203486   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.203884   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.203998   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.204172   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.204342   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.204489   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.204636   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.413931   57426 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-694015" to be "Ready" ...
	I0925 11:30:46.414008   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 11:30:46.416569   57426 node_ready.go:49] node "old-k8s-version-694015" has status "Ready":"True"
	I0925 11:30:46.416586   57426 node_ready.go:38] duration metric: took 2.626333ms waiting for node "old-k8s-version-694015" to be "Ready" ...
	I0925 11:30:46.416594   57426 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:30:46.420795   57426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace to be "Ready" ...
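
The pod_ready lines that follow are a poll loop: every couple of seconds the pod is fetched and its Ready condition inspected, and each `has status "Ready":"False"` line is one tick of that loop. A minimal client-go sketch of the same check, assuming the on-VM kubeconfig path seen in the log; the 2s/6m cadence mirrors the log, but the layout is illustrative rather than minikube's own code.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Poll every 2s, give up after 6m, mirroring the cadence above.
    	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"coredns-5644d7b6d9-qnqxm", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient API error: keep polling
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil // condition not reported yet
    	})
    	if err != nil {
    		log.Fatalf("pod never became Ready: %v", err)
    	}
    	fmt.Println("pod is Ready")
    }
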
	I0925 11:30:46.484507   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 11:30:46.484532   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0925 11:30:46.532417   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0925 11:30:46.532443   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0925 11:30:46.575299   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 11:30:46.575317   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0925 11:30:46.595994   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:30:46.596018   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0925 11:30:46.652448   57426 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	registry.k8s.io/pause:3.1
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0925 11:30:46.652473   57426 cache_images.go:84] Images are preloaded, skipping loading
	I0925 11:30:46.652480   57426 cache_images.go:262] succeeded pushing to: old-k8s-version-694015
	I0925 11:30:46.652483   57426 cache_images.go:263] failed pushing to: 
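
The "Got preloaded images" / "Images are preloaded, skipping loading" pair means the image-cache check found every required image already present in the VM's Docker daemon, so nothing had to be pushed. A simplified local sketch of that check follows; minikube runs the listing over SSH, and the required-image set here is just a subset of the list above.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List what the runtime already holds, as the docker.go step above does.
    	out, err := exec.Command("docker", "images",
    		"--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	required := []string{
    		"k8s.gcr.io/kube-apiserver:v1.16.0",
    		"k8s.gcr.io/etcd:3.3.15-0",
    		"k8s.gcr.io/coredns:1.6.2",
    	}
    	missing := 0
    	for _, img := range required {
    		if !have[img] {
    			fmt.Println("missing:", img)
    			missing++
    		}
    	}
    	if missing == 0 {
    		fmt.Println("Images are preloaded, skipping loading")
    	}
    }
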
	I0925 11:30:46.652504   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:46.652518   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:46.652957   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:46.652963   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:46.652991   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:46.653007   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:46.653020   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:46.653288   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:46.653304   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:46.705521   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:30:46.707099   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:30:46.712115   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0925 11:30:46.712134   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0925 11:30:46.762833   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 11:30:46.851711   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0925 11:30:46.851753   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0925 11:30:47.115165   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0925 11:30:47.115193   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0925 11:30:47.386363   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0925 11:30:47.386386   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0925 11:30:47.610468   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0925 11:30:47.610490   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0925 11:30:47.697559   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0925 11:30:47.697578   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0925 11:30:47.864150   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0925 11:30:47.864169   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0925 11:30:47.915917   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:30:47.915945   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0925 11:30:48.000793   57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.586742998s)
	I0925 11:30:48.000836   57426 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
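
The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a "log" directive ahead of the "errors" plugin and a "hosts" stanza ahead of the "forward" plugin, so in-cluster lookups of host.minikube.internal resolve to the host-side gateway (192.168.50.1 here). Reconstructed from the sed expression itself, the relevant part of the Corefile afterwards reads (unrelated plugins elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
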
	I0925 11:30:48.085411   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:30:48.190617   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.485051258s)
	I0925 11:30:48.190677   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.190691   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.191035   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.191056   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.191068   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.191078   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.192850   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.192853   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.192876   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.192885   57426 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-694015"
	I0925 11:30:48.465209   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:48.575177   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.868034342s)
	I0925 11:30:48.575232   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575246   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575181   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.812311763s)
	I0925 11:30:48.575317   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575328   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575540   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.575560   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.575570   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575579   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575635   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.575742   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.575772   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.575781   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.575789   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575797   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575878   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.575903   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.575911   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.577345   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.577384   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.577406   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.577435   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.577451   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.577940   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.577944   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.577964   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:49.298546   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.21307781s)
	I0925 11:30:49.298606   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:49.298628   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:49.302266   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:49.302272   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:49.302307   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:49.302321   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:49.302331   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:49.302655   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:49.302695   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:49.302717   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:49.304441   57426 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-694015 addons enable metrics-server	
	
	
	I0925 11:30:49.306061   57426 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0925 11:30:49.307539   57426 addons.go:502] enable addons completed in 3.258962527s: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0925 11:30:50.630959   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:53.128983   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:50.940378   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:53.436796   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:55.437380   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:55.131064   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:57.628873   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:57.449840   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:59.938237   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:59.629445   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:02.129311   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:02.438436   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:04.937614   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:04.627904   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:06.629258   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:08.629473   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:06.937878   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:09.437807   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:11.128681   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:13.129731   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:11.939073   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:14.437620   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:15.628774   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:17.630838   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:16.938666   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:19.437732   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:20.139603   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:22.629587   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:21.938151   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:23.938328   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:25.130178   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:27.628803   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:26.439526   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:28.937508   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:29.631037   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:32.128151   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:30.943648   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:33.437428   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:35.438086   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:34.129227   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:36.129294   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:38.629985   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:37.439039   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:39.442448   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:41.129913   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:43.631099   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:41.937237   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:43.939282   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:46.128833   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:48.628446   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:46.438561   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:48.938598   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:50.629674   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:53.129010   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:50.938694   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:52.939141   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:55.438245   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:55.629903   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:58.128851   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:57.937434   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:00.437596   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:00.129216   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:02.629241   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:02.437909   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:04.438109   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:04.629284   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:07.128455   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:06.438145   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:08.938681   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:09.129543   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:11.629259   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:11.438436   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:13.438614   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:14.130657   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:16.629579   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:15.938889   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:18.438798   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:19.129812   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:21.630003   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:20.937670   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:22.938056   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:24.938180   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:24.128380   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:26.129010   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:28.630164   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:26.938537   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:28.938993   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:31.127679   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:33.128750   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:30.939782   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:33.438287   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:35.438564   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:35.128786   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:37.129289   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:37.938062   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:40.438394   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:39.129627   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:41.131250   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:43.629234   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:42.439143   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:44.938221   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:45.630527   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:48.128292   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:46.940247   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:48.940644   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:50.128630   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:52.129574   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:51.437686   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:53.438013   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:55.438473   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:54.629843   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:57.128814   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:57.939231   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:00.438636   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:59.633169   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:02.129926   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:02.937519   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:04.937631   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:04.629189   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:06.629835   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:08.629868   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:07.436605   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:09.437297   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:11.128030   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:13.128211   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:11.438337   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:13.939288   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:14.611278   59899 pod_ready.go:81] duration metric: took 4m0.000327599s waiting for pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:14.611332   59899 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:33:14.611349   59899 pod_ready.go:38] duration metric: took 4m12.007655968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:33:14.611376   59899 kubeadm.go:640] restartCluster took 4m31.218254898s
	W0925 11:33:14.611443   59899 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0925 11:33:14.611477   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0925 11:33:15.940496   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:18.440278   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:23.826236   59899 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (9.214737742s)
	I0925 11:33:23.826300   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:33:23.840564   59899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 11:33:23.850760   59899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 11:33:23.860161   59899 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 11:33:23.860203   59899 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
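
The decision path above is worth spelling out: restartCluster timed out waiting for system-critical pods, so this profile falls back to "kubeadm reset" followed by a fresh "kubeadm init". The intervening "ls -la" is the stale-config probe, and exit status 2 (all four kubeconfig files already gone after the reset) means there is nothing left to clean up. A minimal sketch of that probe, with binary and file paths copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirror of the check above: if any expected kubeconfig is missing
    	// (non-zero ls exit), skip stale-config cleanup and go straight to init.
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	cmd := exec.Command("sudo", append([]string{"ls", "-la"}, files...)...)
    	if err := cmd.Run(); err != nil {
    		if ee, ok := err.(*exec.ExitError); ok {
    			fmt.Printf("config check failed (exit %d): skipping stale config cleanup\n",
    				ee.ExitCode())
    			return
    		}
    		panic(err)
    	}
    	fmt.Println("all configs present: clean up stale ones before init")
    }
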
	I0925 11:33:20.938819   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:22.939228   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:24.940142   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:24.111104   59899 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 11:33:27.440968   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:29.937681   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:33.957801   59899 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0925 11:33:33.957861   59899 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 11:33:33.957964   59899 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 11:33:33.958127   59899 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 11:33:33.958257   59899 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 11:33:33.958352   59899 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 11:33:33.961247   59899 out.go:204]   - Generating certificates and keys ...
	I0925 11:33:33.961330   59899 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 11:33:33.961381   59899 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 11:33:33.961482   59899 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0925 11:33:33.961584   59899 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0925 11:33:33.961691   59899 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0925 11:33:33.961764   59899 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0925 11:33:33.961860   59899 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0925 11:33:33.961946   59899 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0925 11:33:33.962038   59899 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0925 11:33:33.962141   59899 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0925 11:33:33.962189   59899 kubeadm.go:322] [certs] Using the existing "sa" key
	I0925 11:33:33.962274   59899 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 11:33:33.962342   59899 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 11:33:33.962404   59899 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 11:33:33.962484   59899 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 11:33:33.962596   59899 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 11:33:33.962722   59899 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 11:33:33.962812   59899 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 11:33:33.964227   59899 out.go:204]   - Booting up control plane ...
	I0925 11:33:33.964334   59899 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 11:33:33.964411   59899 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 11:33:33.964484   59899 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 11:33:33.964622   59899 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 11:33:33.964767   59899 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 11:33:33.964843   59899 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 11:33:33.964974   59899 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 11:33:33.965033   59899 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.004093 seconds
	I0925 11:33:33.965122   59899 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 11:33:33.965219   59899 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 11:33:33.965300   59899 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 11:33:33.965551   59899 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-094323 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 11:33:33.965631   59899 kubeadm.go:322] [bootstrap-token] Using token: jxl01o.6st4cg36x4e3zwsq
	I0925 11:33:33.968152   59899 out.go:204]   - Configuring RBAC rules ...
	I0925 11:33:33.968255   59899 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 11:33:33.968324   59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 11:33:33.968463   59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 11:33:33.968579   59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 11:33:33.968719   59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 11:33:33.968841   59899 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 11:33:33.968990   59899 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 11:33:33.969057   59899 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 11:33:33.969115   59899 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 11:33:33.969125   59899 kubeadm.go:322] 
	I0925 11:33:33.969197   59899 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 11:33:33.969206   59899 kubeadm.go:322] 
	I0925 11:33:33.969302   59899 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 11:33:33.969309   59899 kubeadm.go:322] 
	I0925 11:33:33.969339   59899 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 11:33:33.969409   59899 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 11:33:33.969481   59899 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 11:33:33.969494   59899 kubeadm.go:322] 
	I0925 11:33:33.969577   59899 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0925 11:33:33.969592   59899 kubeadm.go:322] 
	I0925 11:33:33.969652   59899 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 11:33:33.969661   59899 kubeadm.go:322] 
	I0925 11:33:33.969721   59899 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 11:33:33.969820   59899 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 11:33:33.969931   59899 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 11:33:33.969945   59899 kubeadm.go:322] 
	I0925 11:33:33.970020   59899 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 11:33:33.970079   59899 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 11:33:33.970085   59899 kubeadm.go:322] 
	I0925 11:33:33.970149   59899 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jxl01o.6st4cg36x4e3zwsq \
	I0925 11:33:33.970246   59899 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 \
	I0925 11:33:33.970273   59899 kubeadm.go:322] 	--control-plane 
	I0925 11:33:33.970286   59899 kubeadm.go:322] 
	I0925 11:33:33.970379   59899 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 11:33:33.970391   59899 kubeadm.go:322] 
	I0925 11:33:33.970473   59899 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jxl01o.6st4cg36x4e3zwsq \
	I0925 11:33:33.970561   59899 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 
	I0925 11:33:33.970570   59899 cni.go:84] Creating CNI manager for ""
	I0925 11:33:33.970583   59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:33:33.973276   59899 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 11:33:33.974771   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 11:33:33.991169   59899 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
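
With Docker via cri-dockerd on Kubernetes v1.24+ no CNI comes configured out of the box, which is why the bridge CNI is recommended above and 457 bytes are written to /etc/cni/net.d/1-k8s.conflist. The log does not show the payload itself; the sketch below writes a representative bridge + portmap chain, so every field in the JSON is an assumption rather than minikube's exact file.

    package main

    import (
    	"log"
    	"os"
    )

    // Representative bridge CNI chain; the exact contents of the 457-byte
    // 1-k8s.conflist are not in the log, so these fields are assumptions
    // based on a typical bridge + portmap configuration.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
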
	I0925 11:33:34.014483   59899 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 11:33:34.014576   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:34.014605   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=embed-certs-094323 minikube.k8s.io/updated_at=2023_09_25T11_33_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:31.938903   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:34.438342   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:34.061656   59899 ops.go:34] apiserver oom_adj: -16
	I0925 11:33:34.486947   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:34.586316   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:35.181870   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:35.682572   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:36.182427   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:36.682439   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:37.182278   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:37.682264   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:38.181892   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:38.681964   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:36.938434   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:39.437659   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:39.181618   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:39.682052   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:40.181879   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:40.682579   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:41.182334   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:41.682270   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:42.181757   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:42.682314   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:43.181975   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:43.682310   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:41.438288   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:43.937112   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:44.182254   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:44.682566   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:45.181651   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:45.681891   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:45.783591   59899 kubeadm.go:1081] duration metric: took 11.769084129s to wait for elevateKubeSystemPrivileges.
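
The long run of "kubectl get sa default" commands above is the elevateKubeSystemPrivileges wait: the default ServiceAccount is created asynchronously once the API server is up, and the cluster-admin binding issued earlier cannot take effect until it exists, so the CLI is retried roughly twice a second until it exits 0. A minimal sketch of the loop, with the binary and kubeconfig paths from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.28.2/kubectl"
    	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
    	deadline := time.Now().Add(5 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Exit 0 means the default ServiceAccount exists.
    		if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
    			fmt.Println("default service account present")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
    	}
    	fmt.Println("timed out waiting for default service account")
    }
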
	I0925 11:33:45.783631   59899 kubeadm.go:406] StartCluster complete in 5m2.419220731s
	I0925 11:33:45.783654   59899 settings.go:142] acquiring lock: {Name:mk372f3d0f6e5777ebfc48341e146821f27f636c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:33:45.783749   59899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:33:45.785139   59899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:33:45.785373   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 11:33:45.785497   59899 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 11:33:45.785578   59899 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-094323"
	I0925 11:33:45.785591   59899 addons.go:69] Setting default-storageclass=true in profile "embed-certs-094323"
	I0925 11:33:45.785600   59899 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-094323"
	W0925 11:33:45.785608   59899 addons.go:240] addon storage-provisioner should already be in state true
	I0925 11:33:45.785610   59899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-094323"
	I0925 11:33:45.785613   59899 addons.go:69] Setting metrics-server=true in profile "embed-certs-094323"
	I0925 11:33:45.785629   59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 11:33:45.785624   59899 addons.go:69] Setting dashboard=true in profile "embed-certs-094323"
	I0925 11:33:45.785641   59899 addons.go:231] Setting addon metrics-server=true in "embed-certs-094323"
	I0925 11:33:45.785649   59899 host.go:66] Checking if "embed-certs-094323" exists ...
	W0925 11:33:45.785652   59899 addons.go:240] addon metrics-server should already be in state true
	I0925 11:33:45.785661   59899 addons.go:231] Setting addon dashboard=true in "embed-certs-094323"
	W0925 11:33:45.785671   59899 addons.go:240] addon dashboard should already be in state true
	I0925 11:33:45.785702   59899 host.go:66] Checking if "embed-certs-094323" exists ...
	I0925 11:33:45.785726   59899 host.go:66] Checking if "embed-certs-094323" exists ...
	I0925 11:33:45.785720   59899 cache.go:107] acquiring lock: {Name:mk67fca357e44d730577a3f111223198f60ef976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:33:45.785794   59899 cache.go:115] /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0925 11:33:45.785804   59899 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 97.126µs
	I0925 11:33:45.785813   59899 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0925 11:33:45.785842   59899 cache.go:87] Successfully saved all images to host disk.
	I0925 11:33:45.786040   59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 11:33:45.786074   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786077   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786103   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786119   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.786100   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786148   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.786175   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.786226   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.786382   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786458   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.804658   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0925 11:33:45.804729   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I0925 11:33:45.804829   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0925 11:33:45.805237   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.805268   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.805835   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.805855   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.806126   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0925 11:33:45.806245   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.806461   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.806533   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.806584   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.806593   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.806608   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.806726   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41161
	I0925 11:33:45.806958   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.806973   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.807052   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.807117   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.807146   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.807158   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.807335   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.807550   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.807552   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.807628   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.807655   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.807678   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.807709   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.808075   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.808113   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.808146   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.808643   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.808695   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.809669   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.809713   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.815794   59899 addons.go:231] Setting addon default-storageclass=true in "embed-certs-094323"
	W0925 11:33:45.815817   59899 addons.go:240] addon default-storageclass should already be in state true
	I0925 11:33:45.815845   59899 host.go:66] Checking if "embed-certs-094323" exists ...
	I0925 11:33:45.816191   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.816218   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
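Each "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:<port>" pair above is libmachine starting the driver binary as a separate process and talking to it over RPC on an ephemeral loopback port. The following net/rpc sketch shows that pattern in miniature; the Driver type and its single method are illustrative stand-ins, not libmachine's real plugin interface (which exposes GetState, GetSSHHostname, and the other calls seen in this log).

package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Driver stands in for a machine-driver plugin.
type Driver struct{}

func (d *Driver) GetVersion(_ int, v *int) error { *v = 1; return nil }

func main() {
	srv := rpc.NewServer()
	_ = srv.Register(new(Driver))
	ln, err := net.Listen("tcp", "127.0.0.1:0") // ephemeral port, e.g. 127.0.0.1:43997
	if err != nil {
		panic(err)
	}
	fmt.Println("Plugin server listening at address", ln.Addr())
	go srv.Accept(ln)

	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	var v int
	_ = client.Call("Driver.GetVersion", 0, &v)
	fmt.Println("Using API Version", v) // matches the log lines above
}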
	I0925 11:33:45.818468   59899 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-094323" context rescaled to 1 replicas
	I0925 11:33:45.818498   59899 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 11:33:45.820484   59899 out.go:177] * Verifying Kubernetes components...
	I0925 11:33:45.821970   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:33:45.827608   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43233
	I0925 11:33:45.827764   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41093
	I0925 11:33:45.828140   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.828192   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.828742   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.828756   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.828865   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.828875   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.829243   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.829291   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.829499   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.829508   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.829541   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0925 11:33:45.830368   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.830816   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.830834   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.830898   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0925 11:33:45.831336   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.831343   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.831544   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.831741   59899 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:33:45.831767   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.831896   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.831910   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.831962   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.832006   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.834683   59899 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0925 11:33:45.833215   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.835296   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.836115   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.836132   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.836140   59899 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 11:33:45.835941   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.837552   59899 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:33:45.837565   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 11:33:45.837580   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.836081   59899 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 11:33:45.837626   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0925 11:33:45.837640   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.836328   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.837722   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.838263   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.838449   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:45.840153   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.841675   59899 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0925 11:33:45.843211   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0925 11:33:45.841916   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.842082   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.842734   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.842915   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.843565   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.844615   59899 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0925 11:33:45.845951   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0925 11:33:45.845966   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0925 11:33:45.845980   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.844700   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.844729   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.846027   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.844863   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.846043   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.844886   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.845165   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.846085   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.846265   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.846317   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.846412   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:45.846432   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.847139   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:45.847153   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.847192   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.848989   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.849283   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.849314   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.849456   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.849635   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.849777   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.849913   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:45.862447   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40913
	I0925 11:33:45.862828   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.863295   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.863325   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.863706   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.863888   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.865511   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.865802   59899 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 11:33:45.865821   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 11:33:45.865840   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.868353   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.868774   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.868808   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.868936   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.869132   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.869260   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.869371   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:46.090766   59899 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 11:33:46.090794   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0925 11:33:46.148251   59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:33:46.244486   59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 11:33:46.246747   59899 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 11:33:46.246767   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0925 11:33:46.285706   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0925 11:33:46.285733   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0925 11:33:46.399367   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0925 11:33:46.399389   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0925 11:33:46.454580   59899 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:33:46.454598   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
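Every addon install above follows the same two-step pattern: scp the manifest from memory to /etc/kubernetes/addons/ inside the VM, then apply it with the cluster's own kubectl under /var/lib/minikube/binaries. A minimal local sketch of that write-then-apply step follows; it assumes kubectl is on PATH, and the manifest body is illustrative rather than one of minikube's embedded assets.

package main

import (
	"log"
	"os"
	"os/exec"
)

// Illustrative manifest; minikube writes its own assets over SSH
// to /etc/kubernetes/addons/ before applying them.
const manifest = `apiVersion: v1
kind: Namespace
metadata:
  name: demo-addon
`

func main() {
	path := "/tmp/demo-addon.yaml"
	if err := os.WriteFile(path, []byte(manifest), 0o644); err != nil {
		log.Fatal(err)
	}
	// Step 2, as in the log: kubectl apply -f <manifest>
	out, err := exec.Command("kubectl", "apply", "-f", path).CombinedOutput()
	if err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}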
	I0925 11:33:46.478692   59899 node_ready.go:35] waiting up to 6m0s for node "embed-certs-094323" to be "Ready" ...
	I0925 11:33:46.478749   59899 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0925 11:33:46.478754   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
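The bash pipeline above edits the coredns ConfigMap in place: it reads the Corefile with kubectl get, uses sed to splice a hosts block in front of the forward directive (and a log directive in front of errors), then pipes the result back through kubectl replace. Reconstructed from the sed expression itself, the injected block is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

This is what lets pods resolve host.minikube.internal to the host-side bridge address, as the "host record injected into CoreDNS's ConfigMap" line a few lines below confirms.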
	I0925 11:33:46.478763   59899 cache_images.go:84] Images are preloaded, skipping loading
	I0925 11:33:46.478772   59899 cache_images.go:262] succeeded pushing to: embed-certs-094323
	I0925 11:33:46.478777   59899 cache_images.go:263] failed pushing to: 
	I0925 11:33:46.478797   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:46.478821   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:46.479120   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:46.479177   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:46.479190   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:46.479200   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:46.479138   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:46.479613   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:46.479623   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:46.479632   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:46.495731   59899 node_ready.go:49] node "embed-certs-094323" has status "Ready":"True"
	I0925 11:33:46.495756   59899 node_ready.go:38] duration metric: took 17.032177ms waiting for node "embed-certs-094323" to be "Ready" ...
	I0925 11:33:46.495768   59899 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:33:46.502666   59899 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:46.590707   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0925 11:33:46.590728   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0925 11:33:46.646116   59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:33:46.836729   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0925 11:33:46.836758   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0925 11:33:47.081956   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0925 11:33:47.081978   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0925 11:33:47.372971   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0925 11:33:47.372999   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0925 11:33:47.548990   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0925 11:33:47.549016   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0925 11:33:47.759403   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0925 11:33:47.759425   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0925 11:33:48.094571   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:33:48.094601   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0925 11:33:48.300509   59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:33:48.523994   59899 pod_ready.go:102] pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:49.536334   59899 pod_ready.go:92] pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.536354   59899 pod_ready.go:81] duration metric: took 3.03366041s waiting for pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.536365   59899 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.539583   59899 pod_ready.go:97] error getting pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-pbwqs" not found
	I0925 11:33:49.539613   59899 pod_ready.go:81] duration metric: took 3.241249ms waiting for pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:49.539624   59899 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-pbwqs" not found
	I0925 11:33:49.539633   59899 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.549714   59899 pod_ready.go:92] pod "etcd-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.549731   59899 pod_ready.go:81] duration metric: took 10.090379ms waiting for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.549742   59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.554903   59899 pod_ready.go:92] pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.554917   59899 pod_ready.go:81] duration metric: took 5.167429ms waiting for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.554927   59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.564229   59899 pod_ready.go:92] pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.564249   59899 pod_ready.go:81] duration metric: took 9.314363ms waiting for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.564261   59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pjwm2" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.568126   59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.41983793s)
	I0925 11:33:49.568187   59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.323661752s)
	I0925 11:33:49.568232   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568239   59899 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.089462417s)
	I0925 11:33:49.568251   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568256   59899 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0925 11:33:49.568301   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568319   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568360   59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.922215522s)
	I0925 11:33:49.568392   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568407   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568608   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.568626   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.568637   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568643   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.568674   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.568685   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568689   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.568695   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568697   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:49.568704   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.568646   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568716   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568725   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568613   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:49.568959   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:49.568977   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:49.569003   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.569015   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.569016   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.569024   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.569031   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.569036   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.569045   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.569048   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.569033   59899 addons.go:467] Verifying addon metrics-server=true in "embed-certs-094323"
	I0925 11:33:49.569276   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.569292   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.883443   59899 pod_ready.go:92] pod "kube-proxy-pjwm2" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.883465   59899 pod_ready.go:81] duration metric: took 319.196098ms waiting for pod "kube-proxy-pjwm2" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.883477   59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:50.292288   59899 pod_ready.go:92] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:50.292314   59899 pod_ready.go:81] duration metric: took 408.829404ms waiting for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:50.292325   59899 pod_ready.go:38] duration metric: took 3.79654573s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
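The pod_ready checks above poll each system-critical pod until its Ready condition reports True, and treat a pod that no longer exists (like coredns-5dd5756b68-pbwqs) as skippable rather than fatal. Here is a minimal client-go sketch of that poll loop, not minikube's actual pod_ready.go implementation; the pod name, namespace, and 6m timeout follow the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-pjwm2", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod not found (skipping!)")
			return
		}
		if err == nil && isReady(pod) {
			fmt.Println(`pod has status "Ready":"True"`)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}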
	I0925 11:33:50.292349   59899 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:33:50.292413   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:33:50.390976   59899 api_server.go:72] duration metric: took 4.572446849s to wait for apiserver process to appear ...
	I0925 11:33:50.390998   59899 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:33:50.391016   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:33:50.391107   59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.090546724s)
	I0925 11:33:50.391160   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:50.391179   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:50.391539   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:50.391540   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:50.391568   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:50.391584   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:50.391594   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:50.391810   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:50.391822   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:50.391828   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:50.393750   59899 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-094323 addons enable metrics-server	
	
	
	I0925 11:33:50.395438   59899 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0925 11:33:45.939462   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:47.439176   57426 pod_ready.go:97] node "old-k8s-version-694015" hosting pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.439201   57426 pod_ready.go:81] duration metric: took 3m1.018383263s waiting for pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:47.439210   57426 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-694015" hosting pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.439218   57426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:47.441757   57426 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rn247" not found
	I0925 11:33:47.441785   57426 pod_ready.go:81] duration metric: took 2.55834ms waiting for pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:47.441797   57426 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rn247" not found
	I0925 11:33:47.441806   57426 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gsdzk" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:47.447728   57426 pod_ready.go:97] node "old-k8s-version-694015" hosting pod "kube-proxy-gsdzk" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.447759   57426 pod_ready.go:81] duration metric: took 5.944858ms waiting for pod "kube-proxy-gsdzk" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:47.447770   57426 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-694015" hosting pod "kube-proxy-gsdzk" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.447777   57426 pod_ready.go:38] duration metric: took 3m1.031173472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:33:47.447809   57426 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:33:47.447887   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:33:47.480326   57426 logs.go:284] 1 containers: [34825b8222f1]
	I0925 11:33:47.480410   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:33:47.500790   57426 logs.go:284] 1 containers: [4b655f8475a9]
	I0925 11:33:47.500883   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:33:47.521967   57426 logs.go:284] 1 containers: [c4e353aa787b]
	I0925 11:33:47.522043   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:33:47.542833   57426 logs.go:284] 1 containers: [08dbfa6061b3]
	I0925 11:33:47.542921   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:33:47.564220   57426 logs.go:284] 1 containers: [2bccdb65c1cc]
	I0925 11:33:47.564296   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:33:47.585142   57426 logs.go:284] 1 containers: [59225a8740b7]
	I0925 11:33:47.585233   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:33:47.604606   57426 logs.go:284] 0 containers: []
	W0925 11:33:47.604638   57426 logs.go:286] No container was found matching "kindnet"
	I0925 11:33:47.604734   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:33:47.634903   57426 logs.go:284] 1 containers: [0f9de8bda7fb]
	I0925 11:33:47.634987   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:33:47.659599   57426 logs.go:284] 1 containers: [90dc66317fc1]
	I0925 11:33:47.659654   57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
	I0925 11:33:47.659677   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
	I0925 11:33:47.713402   57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
	I0925 11:33:47.713441   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
	I0925 11:33:47.746308   57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
	I0925 11:33:47.746347   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
	I0925 11:33:47.777953   57426 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:33:47.777991   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:33:47.933013   57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
	I0925 11:33:47.933041   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
	I0925 11:33:47.959588   57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
	I0925 11:33:47.959623   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
	I0925 11:33:47.989240   57426 logs.go:123] Gathering logs for container status ...
	I0925 11:33:47.989285   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:33:48.069991   57426 logs.go:123] Gathering logs for kubelet ...
	I0925 11:33:48.070022   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 11:33:48.107511   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.108197   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.108438   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.108657   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940    1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
	W0925 11:33:48.109661   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.109891   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.110800   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.111045   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.111291   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.111524   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.112518   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.112765   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.112989   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113221   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113444   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113656   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113877   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.114848   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.115076   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115297   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115517   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115743   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115978   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.116194   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.148933   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:48.150648   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:48.152304   57426 logs.go:123] Gathering logs for dmesg ...
	I0925 11:33:48.152321   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:33:48.170706   57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
	I0925 11:33:48.170735   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
	I0925 11:33:48.204533   57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
	I0925 11:33:48.204574   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
	I0925 11:33:48.242201   57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
	I0925 11:33:48.242239   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
	I0925 11:33:48.305874   57426 logs.go:123] Gathering logs for Docker ...
	I0925 11:33:48.305916   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:33:48.375041   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:48.375074   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0925 11:33:48.375130   57426 out.go:239] X Problems detected in kubelet:
	W0925 11:33:48.375142   57426 out.go:239]   Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.375161   57426 out.go:239]   Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.375169   57426 out.go:239]   Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.375176   57426 out.go:239]   Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:48.375185   57426 out.go:239]   Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:48.375190   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:48.375199   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:33:50.396708   59899 addons.go:502] enable addons completed in 4.611221618s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0925 11:33:50.409202   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 200:
	ok
	I0925 11:33:50.411339   59899 api_server.go:141] control plane version: v1.28.2
	I0925 11:33:50.411356   59899 api_server.go:131] duration metric: took 20.35197ms to wait for apiserver health ...
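The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, considered healthy once it returns 200 with body "ok", as the log shows. A minimal sketch of that probe follows; certificate verification is skipped here only to keep the sketch short, and a real check should trust the cluster CA from the kubeconfig instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.111:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}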
	I0925 11:33:50.411366   59899 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:33:50.490420   59899 system_pods.go:59] 8 kube-system pods found
	I0925 11:33:50.490453   59899 system_pods.go:61] "coredns-5dd5756b68-56lj4" [447db0fe-7ec3-443c-9219-f6520653ae3f] Running
	I0925 11:33:50.490461   59899 system_pods.go:61] "etcd-embed-certs-094323" [48127edf-44a2-46ac-b5db-c1d47f97c3a5] Running
	I0925 11:33:50.490468   59899 system_pods.go:61] "kube-apiserver-embed-certs-094323" [3a47c725-2ede-48c8-a825-e3d1f90710f2] Running
	I0925 11:33:50.490476   59899 system_pods.go:61] "kube-controller-manager-embed-certs-094323" [8692df25-5b4e-424b-8ae0-aedd5f249b98] Running
	I0925 11:33:50.490483   59899 system_pods.go:61] "kube-proxy-pjwm2" [845a56ac-d0b3-4331-aa60-8d473ca65a44] Running
	I0925 11:33:50.490489   59899 system_pods.go:61] "kube-scheduler-embed-certs-094323" [12968319-1047-4b1d-a54f-7c192604a75d] Running
	I0925 11:33:50.490500   59899 system_pods.go:61] "metrics-server-57f55c9bc5-5xjw8" [5634c692-d7e5-49d5-a39a-3473e5f58d58] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:33:50.490515   59899 system_pods.go:61] "storage-provisioner" [913ce54f-ebcc-4b9c-bf76-ff0139a1b44f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:33:50.490528   59899 system_pods.go:74] duration metric: took 79.155444ms to wait for pod list to return data ...
	I0925 11:33:50.490540   59899 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:33:50.691794   59899 default_sa.go:45] found service account: "default"
	I0925 11:33:50.691828   59899 default_sa.go:55] duration metric: took 201.27577ms for default service account to be created ...
	I0925 11:33:50.691838   59899 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:33:50.887600   59899 system_pods.go:86] 8 kube-system pods found
	I0925 11:33:50.887636   59899 system_pods.go:89] "coredns-5dd5756b68-56lj4" [447db0fe-7ec3-443c-9219-f6520653ae3f] Running
	I0925 11:33:50.887645   59899 system_pods.go:89] "etcd-embed-certs-094323" [48127edf-44a2-46ac-b5db-c1d47f97c3a5] Running
	I0925 11:33:50.887652   59899 system_pods.go:89] "kube-apiserver-embed-certs-094323" [3a47c725-2ede-48c8-a825-e3d1f90710f2] Running
	I0925 11:33:50.887662   59899 system_pods.go:89] "kube-controller-manager-embed-certs-094323" [8692df25-5b4e-424b-8ae0-aedd5f249b98] Running
	I0925 11:33:50.887668   59899 system_pods.go:89] "kube-proxy-pjwm2" [845a56ac-d0b3-4331-aa60-8d473ca65a44] Running
	I0925 11:33:50.887675   59899 system_pods.go:89] "kube-scheduler-embed-certs-094323" [12968319-1047-4b1d-a54f-7c192604a75d] Running
	I0925 11:33:50.887683   59899 system_pods.go:89] "metrics-server-57f55c9bc5-5xjw8" [5634c692-d7e5-49d5-a39a-3473e5f58d58] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:33:50.887694   59899 system_pods.go:89] "storage-provisioner" [913ce54f-ebcc-4b9c-bf76-ff0139a1b44f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:33:50.887707   59899 system_pods.go:126] duration metric: took 195.862461ms to wait for k8s-apps to be running ...
	I0925 11:33:50.887718   59899 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 11:33:50.887769   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:33:50.910382   59899 system_svc.go:56] duration metric: took 22.655864ms WaitForService to wait for kubelet.
	I0925 11:33:50.910410   59899 kubeadm.go:581] duration metric: took 5.091888107s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 11:33:50.910429   59899 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:33:51.083597   59899 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:33:51.083633   59899 node_conditions.go:123] node cpu capacity is 2
	I0925 11:33:51.083648   59899 node_conditions.go:105] duration metric: took 173.214402ms to run NodePressure ...
	I0925 11:33:51.083660   59899 start.go:228] waiting for startup goroutines ...
	I0925 11:33:51.083670   59899 start.go:233] waiting for cluster config update ...
	I0925 11:33:51.083682   59899 start.go:242] writing updated cluster config ...
	I0925 11:33:51.084016   59899 ssh_runner.go:195] Run: rm -f paused
	I0925 11:33:51.130189   59899 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0925 11:33:51.132357   59899 out.go:177] * Done! kubectl is now configured to use "embed-certs-094323" cluster and "default" namespace by default
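Both runners gate on the apiserver /healthz endpoint returning 200 with body "ok" before counting the control plane as healthy, as in the "returned 200: ok" lines above. A minimal sketch of that probe; the polling interval is an assumption, and InsecureSkipVerify stands in for the CA bundle a real kubeconfig would carry:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// healthz polls the apiserver health endpoint until it answers
	// 200/"ok" or the deadline passes.
	func healthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		// Endpoint taken from the old-k8s-version checks below.
		if err := healthz("https://192.168.50.17:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}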
	I0925 11:33:58.376816   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:33:58.397417   57426 api_server.go:72] duration metric: took 3m12.267407933s to wait for apiserver process to appear ...
	I0925 11:33:58.397443   57426 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:33:58.397517   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:33:58.423312   57426 logs.go:284] 1 containers: [34825b8222f1]
	I0925 11:33:58.423385   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:33:58.443439   57426 logs.go:284] 1 containers: [4b655f8475a9]
	I0925 11:33:58.443499   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:33:58.463360   57426 logs.go:284] 1 containers: [c4e353aa787b]
	I0925 11:33:58.463443   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:33:58.486151   57426 logs.go:284] 1 containers: [08dbfa6061b3]
	I0925 11:33:58.486228   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:33:58.507009   57426 logs.go:284] 1 containers: [2bccdb65c1cc]
	I0925 11:33:58.507095   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:33:58.525571   57426 logs.go:284] 1 containers: [59225a8740b7]
	I0925 11:33:58.525647   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:33:58.542397   57426 logs.go:284] 0 containers: []
	W0925 11:33:58.542424   57426 logs.go:286] No container was found matching "kindnet"
	I0925 11:33:58.542481   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:33:58.562186   57426 logs.go:284] 1 containers: [0f9de8bda7fb]
	I0925 11:33:58.562260   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:33:58.580984   57426 logs.go:284] 1 containers: [90dc66317fc1]
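Each diagnostics pass first resolves one container ID per control-plane component with `docker ps -a --filter=name=k8s_<name> --format={{.ID}}`, as in the nine runs above (kindnet legitimately matching zero). A sketch of that enumeration, assuming docker is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs maps each component to its container IDs, using the same
	// "k8s_" name filter the runner issues above. Exited containers are
	// included (-a) so crashed components still surface.
	func containerIDs(components []string) map[string][]string {
		ids := make(map[string][]string)
		for _, c := range components {
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				continue // treat a docker error like "0 containers"
			}
			ids[c] = strings.Fields(string(out))
		}
		return ids
	}

	func main() {
		comps := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "kubernetes-dashboard", "storage-provisioner"}
		for c, list := range containerIDs(comps) {
			fmt.Printf("%d containers: %v (%s)\n", len(list), list, c)
		}
	}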
	I0925 11:33:58.581014   57426 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:33:58.581030   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:33:58.731921   57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
	I0925 11:33:58.731958   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
	I0925 11:33:58.759982   57426 logs.go:123] Gathering logs for Docker ...
	I0925 11:33:58.760017   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:33:58.817088   57426 logs.go:123] Gathering logs for kubelet ...
	I0925 11:33:58.817120   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 11:33:58.851581   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.852006   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.852226   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.852405   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940    1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
	W0925 11:33:58.853080   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.853245   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.853866   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.854027   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.854211   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.854408   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855047   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.855223   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855403   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855601   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855811   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.856008   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.856210   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.856868   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.857032   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857219   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857418   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857616   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857814   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.858011   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.889357   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:58.891108   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:58.893160   57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
	I0925 11:33:58.893178   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
	I0925 11:33:58.927223   57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
	I0925 11:33:58.927264   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
	I0925 11:33:58.951343   57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
	I0925 11:33:58.951376   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
	I0925 11:33:58.979268   57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
	I0925 11:33:58.979303   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
	I0925 11:33:59.010031   57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
	I0925 11:33:59.010059   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
	I0925 11:33:59.050333   57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
	I0925 11:33:59.050367   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
	I0925 11:33:59.093782   57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
	I0925 11:33:59.093820   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
	I0925 11:33:59.118196   57426 logs.go:123] Gathering logs for container status ...
	I0925 11:33:59.118222   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:33:59.228267   57426 logs.go:123] Gathering logs for dmesg ...
	I0925 11:33:59.228306   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
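The gathering pass then tails 400 lines from each source: `docker logs --tail 400 <id>` per container, journalctl for kubelet and docker/cri-docker, `kubectl describe nodes` via the version-matched binary under /var/lib/minikube/binaries, dmesg, and a container-status command that falls back from crictl to docker. A sketch of driving those commands from Go, with the command strings taken verbatim from the run above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs each named diagnostic through bash -c, exactly as the
	// ssh_runner lines above do, returning the output per source.
	func gather(sources map[string]string) map[string]string {
		out := make(map[string]string)
		for name, cmd := range sources {
			b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n", name, err)
			}
			out[name] = string(b)
		}
		return out
	}

	func main() {
		sources := map[string]string{
			"kubelet": "sudo journalctl -u kubelet -n 400",
			"Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
			"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			// crictl if installed, otherwise plain docker ps -a.
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, text := range gather(sources) {
			fmt.Printf("== %s ==\n%s\n", name, text)
		}
	}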
	I0925 11:33:59.247426   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:59.247459   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0925 11:33:59.247517   57426 out.go:239] X Problems detected in kubelet:
	W0925 11:33:59.247534   57426 out.go:239]   Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:59.247545   57426 out.go:239]   Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:59.247554   57426 out.go:239]   Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:59.247563   57426 out.go:239]   Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:59.247574   57426 out.go:239]   Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:59.247584   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:59.247597   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:34:09.249955   57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0925 11:34:09.256612   57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0925 11:34:09.257809   57426 api_server.go:141] control plane version: v1.16.0
	I0925 11:34:09.257827   57426 api_server.go:131] duration metric: took 10.860379501s to wait for apiserver health ...
	I0925 11:34:09.257833   57426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:34:09.257883   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:34:09.280149   57426 logs.go:284] 1 containers: [34825b8222f1]
	I0925 11:34:09.280233   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:34:09.300127   57426 logs.go:284] 1 containers: [4b655f8475a9]
	I0925 11:34:09.300211   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:34:09.332581   57426 logs.go:284] 1 containers: [c4e353aa787b]
	I0925 11:34:09.332656   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:34:09.352994   57426 logs.go:284] 1 containers: [08dbfa6061b3]
	I0925 11:34:09.353061   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:34:09.374892   57426 logs.go:284] 1 containers: [2bccdb65c1cc]
	I0925 11:34:09.374960   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:34:09.395820   57426 logs.go:284] 1 containers: [59225a8740b7]
	I0925 11:34:09.395884   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:34:09.414225   57426 logs.go:284] 0 containers: []
	W0925 11:34:09.414245   57426 logs.go:286] No container was found matching "kindnet"
	I0925 11:34:09.414284   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:34:09.434336   57426 logs.go:284] 1 containers: [0f9de8bda7fb]
	I0925 11:34:09.434398   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:34:09.456185   57426 logs.go:284] 1 containers: [90dc66317fc1]
	I0925 11:34:09.456218   57426 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:34:09.456231   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:34:09.590378   57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
	I0925 11:34:09.590409   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
	I0925 11:34:09.617599   57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
	I0925 11:34:09.617624   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
	I0925 11:34:09.643431   57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
	I0925 11:34:09.643459   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
	I0925 11:34:09.665103   57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
	I0925 11:34:09.665129   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
	I0925 11:34:09.693931   57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
	I0925 11:34:09.693963   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
	I0925 11:34:09.742784   57426 logs.go:123] Gathering logs for Docker ...
	I0925 11:34:09.742812   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:34:09.804145   57426 logs.go:123] Gathering logs for dmesg ...
	I0925 11:34:09.804177   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:34:09.818586   57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
	I0925 11:34:09.818609   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
	I0925 11:34:09.857846   57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
	I0925 11:34:09.857875   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
	I0925 11:34:09.880799   57426 logs.go:123] Gathering logs for container status ...
	I0925 11:34:09.880828   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:34:09.950547   57426 logs.go:123] Gathering logs for kubelet ...
	I0925 11:34:09.950572   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 11:34:09.983084   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.983479   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.983617   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.983758   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940    1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
	W0925 11:34:09.984405   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.984547   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.985367   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.985576   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.985713   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.985898   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.986632   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.986786   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.986945   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987132   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987279   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987469   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987663   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988255   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.988398   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988533   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988685   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988822   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988958   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.989093   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.020550   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:34:10.022302   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:34:10.024541   57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
	I0925 11:34:10.024558   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
	I0925 11:34:10.053454   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:34:10.053477   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0925 11:34:10.053524   57426 out.go:239] X Problems detected in kubelet:
	W0925 11:34:10.053535   57426 out.go:239]   Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.053543   57426 out.go:239]   Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.053551   57426 out.go:239]   Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.053557   57426 out.go:239]   Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:34:10.053563   57426 out.go:239]   Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:34:10.053568   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:34:10.053573   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:34:20.061232   57426 system_pods.go:59] 8 kube-system pods found
	I0925 11:34:20.061260   57426 system_pods.go:61] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.061267   57426 system_pods.go:61] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.061271   57426 system_pods.go:61] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.061277   57426 system_pods.go:61] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.061284   57426 system_pods.go:61] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.061288   57426 system_pods.go:61] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.061295   57426 system_pods.go:61] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.061300   57426 system_pods.go:61] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.061307   57426 system_pods.go:74] duration metric: took 10.803468736s to wait for pod list to return data ...
	I0925 11:34:20.061314   57426 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:34:20.064090   57426 default_sa.go:45] found service account: "default"
	I0925 11:34:20.064114   57426 default_sa.go:55] duration metric: took 2.793638ms for default service account to be created ...
	I0925 11:34:20.064123   57426 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:34:20.068614   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:20.068644   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.068653   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.068674   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.068682   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.068690   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.068696   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.068707   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.068719   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.068739   57426 retry.go:31] will retry after 201.15744ms: missing components: kube-dns, kube-proxy
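Once the apiserver is healthy, the runner repeatedly lists kube-system pods and retries while required components (here kube-dns and kube-proxy) are still Pending; the logged intervals (201ms, 295ms, 438ms, ...) suggest a jittered, roughly geometric backoff. A sketch of such a wait loop; the growth factor, jitter, and check function are assumptions modelled on the retry.go lines above:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil calls check with a jittered, growing delay until it
	// reports no missing components or the deadline passes.
	func retryUntil(timeout time.Duration, check func() []string) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			missing := check()
			if len(missing) == 0 {
				return nil
			}
			// Up to 50% jitter so concurrent waiters do not sync up.
			d := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: missing components: %v\n", d, missing)
			time.Sleep(d)
			delay = delay * 3 / 2
		}
		return fmt.Errorf("components still missing after %v", timeout)
	}

	func main() {
		attempts := 0
		err := retryUntil(5*time.Second, func() []string {
			attempts++
			if attempts < 4 {
				return []string{"kube-dns", "kube-proxy"}
			}
			return nil // all required pods Running
		})
		fmt.Println("done:", err)
	}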
	I0925 11:34:20.275900   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:20.275943   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.275952   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.275960   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.275967   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.275974   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.275982   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.275992   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.276001   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.276021   57426 retry.go:31] will retry after 295.538203ms: missing components: kube-dns, kube-proxy
	I0925 11:34:20.579425   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:20.579469   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.579480   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.579489   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.579497   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.579506   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.579513   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.579522   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.579531   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.579553   57426 retry.go:31] will retry after 438.061345ms: missing components: kube-dns, kube-proxy
	I0925 11:34:21.024313   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:21.024351   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:21.024360   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:21.024365   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:21.024372   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:21.024381   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:21.024390   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:21.024401   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:21.024411   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:21.024428   57426 retry.go:31] will retry after 504.61622ms: missing components: kube-dns, kube-proxy
	I0925 11:34:21.536419   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:21.536449   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:21.536460   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:21.536466   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:21.536470   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:21.536476   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:21.536480   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:21.536486   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:21.536492   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:21.536506   57426 retry.go:31] will retry after 484.39135ms: missing components: kube-dns, kube-proxy
	I0925 11:34:22.027728   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:22.027766   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:22.027776   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:22.027783   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:22.027787   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:22.027796   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:22.027804   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:22.027814   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:22.027822   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:22.027838   57426 retry.go:31] will retry after 680.21989ms: missing components: kube-dns, kube-proxy
	I0925 11:34:22.714282   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:22.714315   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:22.714326   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:22.714335   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:22.714342   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:22.714349   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:22.714354   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:22.714365   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:22.714381   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:22.714399   57426 retry.go:31] will retry after 719.383007ms: missing components: kube-dns, kube-proxy
	I0925 11:34:23.438829   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:23.438855   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:23.438862   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:23.438867   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:23.438872   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:23.438877   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:23.438882   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:23.438891   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:23.438898   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:23.438912   57426 retry.go:31] will retry after 1.277927153s: missing components: kube-dns, kube-proxy
	I0925 11:34:24.724821   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:24.724855   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:24.724864   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:24.724871   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:24.724878   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:24.724887   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:24.724894   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:24.724904   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:24.724919   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:24.724942   57426 retry.go:31] will retry after 1.757108265s: missing components: kube-dns, kube-proxy
	I0925 11:34:26.488127   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:26.488156   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:26.488163   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:26.488182   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:26.488203   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:26.488213   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:26.488222   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:26.488232   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:26.488247   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:26.488266   57426 retry.go:31] will retry after 1.427718537s: missing components: kube-dns, kube-proxy
	I0925 11:34:27.921755   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:27.921783   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:27.921790   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:27.921795   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:27.921800   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:27.921805   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:27.921810   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:27.921815   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:27.921821   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:27.921835   57426 retry.go:31] will retry after 1.957734881s: missing components: kube-dns, kube-proxy
	I0925 11:34:29.885748   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:29.885776   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:29.885783   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:29.885789   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:29.885794   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:29.885799   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:29.885803   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:29.885810   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:29.885815   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:29.885830   57426 retry.go:31] will retry after 3.054467533s: missing components: kube-dns, kube-proxy
	I0925 11:34:32.946353   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:32.946383   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:32.946391   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:32.946396   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:32.946401   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:32.946406   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:32.946410   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:32.946416   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:32.946421   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:32.946434   57426 retry.go:31] will retry after 3.761041339s: missing components: kube-dns, kube-proxy
	I0925 11:34:36.713729   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:36.713754   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:36.713761   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:36.713767   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:36.713772   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:36.713777   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:36.713781   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:36.713788   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:36.713793   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:36.713807   57426 retry.go:31] will retry after 4.734467176s: missing components: kube-dns, kube-proxy
	I0925 11:34:41.454464   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:41.454492   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:41.454498   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:41.454503   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:41.454508   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:41.454513   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:41.454518   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:41.454524   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:41.454529   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:41.454542   57426 retry.go:31] will retry after 4.698913888s: missing components: kube-dns, kube-proxy
	I0925 11:34:46.159214   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:46.159255   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:46.159266   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:46.159275   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:46.159282   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:46.159292   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:46.159299   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:46.159314   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:46.159328   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:46.159350   57426 retry.go:31] will retry after 5.507304477s: missing components: kube-dns, kube-proxy
	I0925 11:34:51.672849   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:51.672877   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:51.672884   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:51.672889   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:51.672894   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:51.672899   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:51.672905   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:51.672914   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:51.672919   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:51.672933   57426 retry.go:31] will retry after 8.254229342s: missing components: kube-dns, kube-proxy
	I0925 11:34:59.936057   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:59.936086   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:59.936094   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:59.936099   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:59.936104   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:59.936109   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:59.936114   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:59.936119   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:59.936125   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:59.936139   57426 retry.go:31] will retry after 9.535060954s: missing components: kube-dns, kube-proxy
	I0925 11:35:09.479385   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:09.479413   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:09.479420   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:09.479428   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:09.479433   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:09.479441   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:09.479446   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:09.479452   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:09.479459   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:09.479471   57426 retry.go:31] will retry after 13.479799453s: missing components: kube-dns, kube-proxy
	I0925 11:35:22.964926   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:22.964955   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:22.964962   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:22.964967   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:22.964972   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:22.964977   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:22.964982   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:22.964988   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:22.964993   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:22.965006   57426 retry.go:31] will retry after 14.199608167s: missing components: kube-dns, kube-proxy
	I0925 11:35:37.171988   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:37.172022   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:37.172034   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:37.172041   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:37.172048   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:37.172055   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:37.172061   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:37.172072   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:37.172083   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:37.172101   57426 retry.go:31] will retry after 17.274040235s: missing components: kube-dns, kube-proxy
	I0925 11:35:54.452675   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:54.452702   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:54.452709   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:54.452714   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:54.452719   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:54.452727   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:54.452731   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:54.452738   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:54.452743   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:54.452756   57426 retry.go:31] will retry after 28.29436119s: missing components: kube-dns, kube-proxy
	I0925 11:36:22.755662   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:36:22.755700   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:36:22.755710   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:36:22.755718   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:36:22.755724   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:36:22.755732   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:36:22.755746   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:36:22.755761   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:36:22.755771   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:36:22.755791   57426 retry.go:31] will retry after 35.525659438s: missing components: kube-dns, kube-proxy
	I0925 11:36:58.289849   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:36:58.289887   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:36:58.289896   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:36:58.289901   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:36:58.289910   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:36:58.289919   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:36:58.289927   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:36:58.289939   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:36:58.289950   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:36:58.289971   57426 retry.go:31] will retry after 44.058995008s: missing components: kube-dns, kube-proxy
	I0925 11:37:42.356673   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:37:42.356698   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:37:42.356705   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:37:42.356710   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:37:42.356715   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:37:42.356721   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:37:42.356725   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:37:42.356731   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:37:42.356736   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:37:42.356752   57426 retry.go:31] will retry after 47.757072258s: missing components: kube-dns, kube-proxy
	I0925 11:38:30.124408   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:38:30.124436   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:38:30.124443   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:38:30.124449   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:38:30.124454   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:38:30.124459   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:38:30.124464   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:38:30.124470   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:38:30.124475   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:38:30.124490   57426 retry.go:31] will retry after 48.54868015s: missing components: kube-dns, kube-proxy
	I0925 11:39:18.680525   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:39:18.680555   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:39:18.680561   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:39:18.680567   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:39:18.680572   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:39:18.680578   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:39:18.680582   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:39:18.680589   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:39:18.680594   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:39:18.680607   57426 retry.go:31] will retry after 53.095866632s: missing components: kube-dns, kube-proxy
	I0925 11:40:11.783486   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:40:11.783513   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:40:11.783520   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:40:11.783527   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:40:11.783532   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:40:11.783537   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:40:11.783542   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:40:11.783548   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:40:11.783553   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:40:11.786119   57426 out.go:177] 
	W0925 11:40:11.787697   57426 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
	W0925 11:40:11.787711   57426 out.go:239] * 
	W0925 11:40:11.788461   57426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 11:40:11.790057   57426 out.go:177] 
	
	* 
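	The growing "will retry after ..." intervals in the stderr trace above come from minikube repeatedly listing the kube-system pods and backing off between attempts until the expected components (here kube-dns and kube-proxy, which never left Pending) report Running, up to the "wait 6m0s for node" budget. The following is a minimal, self-contained Go sketch of that kind of loop, assuming client-go and a default kubeconfig path; the function name missingComponents, the 6-minute budget, and the backoff constants are illustrative, not minikube's actual system_pods.go/retry.go code.

	package main

	import (
		"context"
		"fmt"
		"math/rand"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// missingComponents lists kube-system pods and returns the names of
	// any that are not in the Running phase (hypothetical helper).
	func missingComponents(cs *kubernetes.Clientset) ([]string, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return nil, err
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		var missing []string
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				missing = append(missing, p.Name)
			}
		}
		return missing, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		delay := 500 * time.Millisecond
		deadline := time.Now().Add(6 * time.Minute) // mirrors the "wait 6m0s for node" budget
		for time.Now().Before(deadline) {
			missing, err := missingComponents(cs)
			if err == nil && len(missing) == 0 {
				fmt.Println("all kube-system components running")
				return
			}
			// Jittered exponential backoff, capped, as in the growing
			// "will retry after ..." intervals in the trace above.
			sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: missing components: %v\n", sleep, missing)
			time.Sleep(sleep)
			if delay < time.Minute {
				delay *= 2
			}
		}
		fmt.Println("timed out waiting for apps_running")
	}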
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 11:24:53 UTC, ends at Mon 2023-09-25 11:40:12 UTC. --
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572406518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572497492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572525871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572544812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618491365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618680379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618696521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618704838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.155674989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.155883992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.156004251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.156243152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.045907108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.046033975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.046090982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.046108215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1190]: time="2023-09-25T11:30:56.109068079Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.462862941Z" level=info msg="shim disconnected" id=5d3673792ccfc336b8935a34b5a443284dc8b677eebf5137a219cccc3c403f5d namespace=moby
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.462964770Z" level=warning msg="cleaning up after shim disconnected" id=5d3673792ccfc336b8935a34b5a443284dc8b677eebf5137a219cccc3c403f5d namespace=moby
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.462982909Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1190]: time="2023-09-25T11:30:56.463078511Z" level=info msg="ignoring event" container=5d3673792ccfc336b8935a34b5a443284dc8b677eebf5137a219cccc3c403f5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824501229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824684623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824701374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824713075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                        COMMAND                  CREATED         STATUS                     PORTS     NAMES
	0f9de8bda7fb   kubernetesui/dashboard       "/dashboard --insecu…"   9 minutes ago   Up 9 minutes                         k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-z674w_kubernetes-dashboard_5d234114-a13f-403f-98e0-7b5fbf830fdd_0
	5d3673792ccf   registry.k8s.io/echoserver   "nginx -g 'daemon of…"   9 minutes ago   Exited (1) 9 minutes ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-mxvxx_kubernetes-dashboard_da3f5657-7e9d-4ba7-b42a-d92a2b5fd683_0
	90dc66317fc1   6e38f40d628d                 "/storage-provisioner"   9 minutes ago   Up 9 minutes                         k8s_storage-provisioner_storage-provisioner_kube-system_c74c1aa8-7249-477e-8ef9-1bcaf418ad03_0
	b16fb26ba287   k8s.gcr.io/pause:3.1         "/pause"                 9 minutes ago   Up 9 minutes                         k8s_POD_storage-provisioner_kube-system_c74c1aa8-7249-477e-8ef9-1bcaf418ad03_0
	4eb82cb0fa23   k8s.gcr.io/pause:3.1         "/pause"                 9 minutes ago   Up 9 minutes                         k8s_POD_kubernetes-dashboard-84b68f675b-z674w_kubernetes-dashboard_5d234114-a13f-403f-98e0-7b5fbf830fdd_0
	802d2fbd8809   k8s.gcr.io/pause:3.1         "/pause"                 9 minutes ago   Up 9 minutes                         k8s_POD_dashboard-metrics-scraper-d6b4b5544-mxvxx_kubernetes-dashboard_da3f5657-7e9d-4ba7-b42a-d92a2b5fd683_0
	6a94e2e5690b   k8s.gcr.io/pause:3.1         "/pause"                 9 minutes ago   Up 9 minutes                         k8s_POD_metrics-server-74d5856cc6-wbskx_kube-system_5925c507-8225-4b9c-b89e-13346451d090_0
	c4e353aa787b   bf261d157914                 "/coredns -conf /etc…"   9 minutes ago   Up 9 minutes                         k8s_coredns_coredns-5644d7b6d9-qnqxm_kube-system_f5167272-c4e6-438f-ba45-f977df42bc3b_0
	2bccdb65c1cc   c21b0c7400f9                 "/usr/local/bin/kube…"   9 minutes ago   Up 9 minutes                         k8s_kube-proxy_kube-proxy-gsdzk_kube-system_d183e6c3-2cf1-46d4-a9ff-e03c97aa161c_0
	2088f3a7c0bc   k8s.gcr.io/pause:3.1         "/pause"                 9 minutes ago   Up 9 minutes                         k8s_POD_kube-proxy-gsdzk_kube-system_d183e6c3-2cf1-46d4-a9ff-e03c97aa161c_0
	75c3319baa09   k8s.gcr.io/pause:3.1         "/pause"                 9 minutes ago   Up 9 minutes                         k8s_POD_coredns-5644d7b6d9-qnqxm_kube-system_f5167272-c4e6-438f-ba45-f977df42bc3b_0
	eb63d31189ed   k8s.gcr.io/pause:3.1         "/pause"                 9 minutes ago   Created                              k8s_POD_coredns-5644d7b6d9-rn247_kube-system_f0e633d0-75fb-4406-928a-ec680c4052fa_0
	4b655f8475a9   b2756210eeab                 "etcd --advertise-cl…"   9 minutes ago   Up 9 minutes                         k8s_etcd_etcd-old-k8s-version-694015_kube-system_319810d3e321e4b27bff365f5661410b_0
	08dbfa6061b3   301ddc62b80b                 "kube-scheduler --au…"   9 minutes ago   Up 9 minutes                         k8s_kube-scheduler_kube-scheduler-old-k8s-version-694015_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	59225a8740b7   06a629a7e51c                 "kube-controller-man…"   9 minutes ago   Up 9 minutes                         k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-694015_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	34825b8222f1   b305571ca60a                 "kube-apiserver --ad…"   9 minutes ago   Up 9 minutes                         k8s_kube-apiserver_kube-apiserver-old-k8s-version-694015_kube-system_ea8f9e449dd1304167590b964553922a_0
	5b274efecb4d   k8s.gcr.io/pause:3.1         "/pause"                 9 minutes ago   Up 9 minutes                         k8s_POD_etcd-old-k8s-version-694015_kube-system_319810d3e321e4b27bff365f5661410b_0
	6e623a69a033   k8s.gcr.io/pause:3.1         "/pause"                 9 minutes ago   Up 9 minutes                         k8s_POD_kube-scheduler-old-k8s-version-694015_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	961cf08898d9   k8s.gcr.io/pause:3.1         "/pause"                 9 minutes ago   Up 9 minutes                         k8s_POD_kube-controller-manager-old-k8s-version-694015_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	713ec26ea888   k8s.gcr.io/pause:3.1         "/pause"                 9 minutes ago   Up 9 minutes                         k8s_POD_kube-apiserver-old-k8s-version-694015_kube-system_ea8f9e449dd1304167590b964553922a_0
	time="2023-09-25T11:40:12Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [c4e353aa787b] <==
	* .:53
	2023-09-25T11:30:47.501Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-09-25T11:30:47.501Z [INFO] CoreDNS-1.6.2
	2023-09-25T11:30:47.501Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-694015
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-694015
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=old-k8s-version-694015
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T11_30_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 11:30:26 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 11:40:08 +0000   Mon, 25 Sep 2023 11:30:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 11:40:08 +0000   Mon, 25 Sep 2023 11:30:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 11:40:08 +0000   Mon, 25 Sep 2023 11:30:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 25 Sep 2023 11:40:08 +0000   Mon, 25 Sep 2023 11:33:47 +0000   KubeletNotReady              PLEG is not healthy: pleg was last seen active 9m22.343926768s ago; threshold is 3m0s
	Addresses:
	  InternalIP:  192.168.50.17
	  Hostname:    old-k8s-version-694015
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 1bd5d978d1e543b686646b2c32f30862
	 System UUID:                1bd5d978-d1e5-43b6-8664-6b2c32f30862
	 Boot ID:                    5678d5b5-5910-4d2d-a245-2b8fc64bd779
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-qnqxm                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m27s
	  kube-system                etcd-old-k8s-version-694015                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                kube-apiserver-old-k8s-version-694015             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                kube-controller-manager-old-k8s-version-694015    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                kube-proxy-gsdzk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                kube-scheduler-old-k8s-version-694015             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                metrics-server-74d5856cc6-wbskx                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m23s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-mxvxx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-z674w             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  9m53s (x8 over 9m54s)  kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m53s (x8 over 9m54s)  kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m53s (x7 over 9m54s)  kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m25s                  kube-proxy, old-k8s-version-694015  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076891] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.528148] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.807712] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.166866] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.627379] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep25 11:25] systemd-fstab-generator[508]: Ignoring "noauto" for root device
	[  +0.112649] systemd-fstab-generator[519]: Ignoring "noauto" for root device
	[  +1.250517] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.395221] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[  +0.132329] systemd-fstab-generator[928]: Ignoring "noauto" for root device
	[  +0.148539] systemd-fstab-generator[941]: Ignoring "noauto" for root device
	[  +6.146658] systemd-fstab-generator[1181]: Ignoring "noauto" for root device
	[  +1.531877] kauditd_printk_skb: 67 callbacks suppressed
	[ +13.077793] systemd-fstab-generator[1658]: Ignoring "noauto" for root device
	[  +0.487565] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.199945] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.809912] kauditd_printk_skb: 5 callbacks suppressed
	[Sep25 11:26] hrtimer: interrupt took 6685373 ns
	[Sep25 11:30] systemd-fstab-generator[6846]: Ignoring "noauto" for root device
	[Sep25 11:31] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [4b655f8475a9] <==
	* 2023-09-25 11:30:21.297192 I | etcdserver: initial cluster = old-k8s-version-694015=https://192.168.50.17:2380
	2023-09-25 11:30:21.310739 I | etcdserver: starting member a74ab9f845be4a88 in cluster e7a7808069af5882
	2023-09-25 11:30:21.310817 I | raft: a74ab9f845be4a88 became follower at term 0
	2023-09-25 11:30:21.348667 I | raft: newRaft a74ab9f845be4a88 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-09-25 11:30:21.348787 I | raft: a74ab9f845be4a88 became follower at term 1
	2023-09-25 11:30:21.595167 W | auth: simple token is not cryptographically signed
	2023-09-25 11:30:21.604807 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-09-25 11:30:21.607417 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-25 11:30:21.608224 I | etcdserver: a74ab9f845be4a88 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-25 11:30:21.609008 I | etcdserver/membership: added member a74ab9f845be4a88 [https://192.168.50.17:2380] to cluster e7a7808069af5882
	2023-09-25 11:30:21.609764 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-25 11:30:21.610013 I | embed: listening for metrics on http://192.168.50.17:2381
	2023-09-25 11:30:22.316022 I | raft: a74ab9f845be4a88 is starting a new election at term 1
	2023-09-25 11:30:22.316075 I | raft: a74ab9f845be4a88 became candidate at term 2
	2023-09-25 11:30:22.316089 I | raft: a74ab9f845be4a88 received MsgVoteResp from a74ab9f845be4a88 at term 2
	2023-09-25 11:30:22.316099 I | raft: a74ab9f845be4a88 became leader at term 2
	2023-09-25 11:30:22.316104 I | raft: raft.node: a74ab9f845be4a88 elected leader a74ab9f845be4a88 at term 2
	2023-09-25 11:30:22.316356 I | etcdserver: setting up the initial cluster version to 3.3
	2023-09-25 11:30:22.318109 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-25 11:30:22.318162 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-25 11:30:22.318191 I | etcdserver: published {Name:old-k8s-version-694015 ClientURLs:[https://192.168.50.17:2379]} to cluster e7a7808069af5882
	2023-09-25 11:30:22.318197 I | embed: ready to serve client requests
	2023-09-25 11:30:22.318821 I | embed: ready to serve client requests
	2023-09-25 11:30:22.319844 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-25 11:30:22.319991 I | embed: serving client requests on 192.168.50.17:2379
	
	* 
	* ==> kernel <==
	*  11:40:12 up 15 min,  0 users,  load average: 0.27, 0.37, 0.27
	Linux old-k8s-version-694015 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [34825b8222f1] <==
	* I0925 11:31:49.979903       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:31:49.979987       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:31:49.980034       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:31:49.980118       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:33:49.980819       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:33:49.981054       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:33:49.981162       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:33:49.981270       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:35:26.965809       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:35:26.965948       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:35:26.966022       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:35:26.966030       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:36:26.966408       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:36:26.966779       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:36:26.966986       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:36:26.967121       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:38:26.967894       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:38:26.968064       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:38:26.968162       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:38:26.968198       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [59225a8740b7] <==
	* I0925 11:33:50.382473       1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	W0925 11:33:57.898753       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:34:17.667175       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:34:29.900850       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:34:47.919904       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:35:01.902850       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:35:18.172387       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:35:33.904989       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:35:48.424547       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:36:05.907379       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:36:18.676868       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:36:37.909138       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:36:48.932033       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:37:09.911153       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:37:19.184303       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:37:41.913226       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:37:49.436394       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:38:13.915534       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:38:19.688419       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:38:45.924819       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:38:49.940696       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:39:17.927265       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:39:20.192628       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:39:49.929359       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:39:50.444391       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [2bccdb65c1cc] <==
	* W0925 11:30:47.128400       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0925 11:30:47.177538       1 node.go:135] Successfully retrieved node IP: 192.168.50.17
	I0925 11:30:47.177648       1 server_others.go:149] Using iptables Proxier.
	I0925 11:30:47.271820       1 server.go:529] Version: v1.16.0
	I0925 11:30:47.304914       1 config.go:313] Starting service config controller
	I0925 11:30:47.305050       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0925 11:30:47.305152       1 config.go:131] Starting endpoints config controller
	I0925 11:30:47.305167       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0925 11:30:47.424722       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0925 11:30:47.424968       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [08dbfa6061b3] <==
	* W0925 11:30:25.965118       1 authentication.go:79] Authentication is disabled
	I0925 11:30:25.965128       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0925 11:30:25.969940       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0925 11:30:26.032268       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 11:30:26.032513       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 11:30:26.034880       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 11:30:26.035163       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 11:30:26.035326       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 11:30:26.035758       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 11:30:26.041977       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 11:30:26.042199       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 11:30:26.042371       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 11:30:26.043936       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 11:30:26.044107       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 11:30:27.035540       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 11:30:27.039764       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 11:30:27.039841       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 11:30:27.044797       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 11:30:27.047742       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 11:30:27.047784       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 11:30:27.049796       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 11:30:27.051510       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 11:30:27.054657       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 11:30:27.058480       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 11:30:27.061633       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 11:24:53 UTC, ends at Mon 2023-09-25 11:40:13 UTC. --
	Sep 25 11:38:08 old-k8s-version-694015 kubelet[6852]: I0925 11:38:08.080055    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m21.857503263s ago; threshold is 3m0s
	Sep 25 11:38:13 old-k8s-version-694015 kubelet[6852]: I0925 11:38:13.080380    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m26.857823167s ago; threshold is 3m0s
	Sep 25 11:38:18 old-k8s-version-694015 kubelet[6852]: I0925 11:38:18.080741    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m31.858155337s ago; threshold is 3m0s
	Sep 25 11:38:23 old-k8s-version-694015 kubelet[6852]: I0925 11:38:23.081649    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m36.859004603s ago; threshold is 3m0s
	Sep 25 11:38:28 old-k8s-version-694015 kubelet[6852]: I0925 11:38:28.082433    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m41.859872366s ago; threshold is 3m0s
	Sep 25 11:38:33 old-k8s-version-694015 kubelet[6852]: I0925 11:38:33.083425    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m46.860872476s ago; threshold is 3m0s
	Sep 25 11:38:38 old-k8s-version-694015 kubelet[6852]: I0925 11:38:38.084178    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m51.86163424s ago; threshold is 3m0s
	Sep 25 11:38:43 old-k8s-version-694015 kubelet[6852]: I0925 11:38:43.085023    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m56.862471059s ago; threshold is 3m0s
	Sep 25 11:38:48 old-k8s-version-694015 kubelet[6852]: I0925 11:38:48.085439    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m1.862884367s ago; threshold is 3m0s
	Sep 25 11:38:53 old-k8s-version-694015 kubelet[6852]: I0925 11:38:53.085770    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m6.863221874s ago; threshold is 3m0s
	Sep 25 11:38:58 old-k8s-version-694015 kubelet[6852]: I0925 11:38:58.086030    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m11.863489755s ago; threshold is 3m0s
	Sep 25 11:39:03 old-k8s-version-694015 kubelet[6852]: I0925 11:39:03.086684    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m16.864149459s ago; threshold is 3m0s
	Sep 25 11:39:08 old-k8s-version-694015 kubelet[6852]: I0925 11:39:08.086940    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m21.864399202s ago; threshold is 3m0s
	Sep 25 11:39:13 old-k8s-version-694015 kubelet[6852]: I0925 11:39:13.087347    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m26.864795058s ago; threshold is 3m0s
	Sep 25 11:39:18 old-k8s-version-694015 kubelet[6852]: I0925 11:39:18.087708    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m31.865164287s ago; threshold is 3m0s
	Sep 25 11:39:23 old-k8s-version-694015 kubelet[6852]: I0925 11:39:23.088620    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m36.866021478s ago; threshold is 3m0s
	Sep 25 11:39:28 old-k8s-version-694015 kubelet[6852]: I0925 11:39:28.089544    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m41.867001241s ago; threshold is 3m0s
	Sep 25 11:39:33 old-k8s-version-694015 kubelet[6852]: I0925 11:39:33.090422    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m46.867863356s ago; threshold is 3m0s
	Sep 25 11:39:38 old-k8s-version-694015 kubelet[6852]: I0925 11:39:38.091175    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m51.868631697s ago; threshold is 3m0s
	Sep 25 11:39:43 old-k8s-version-694015 kubelet[6852]: I0925 11:39:43.091473    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m56.868932531s ago; threshold is 3m0s
	Sep 25 11:39:48 old-k8s-version-694015 kubelet[6852]: I0925 11:39:48.091888    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 9m1.86934497s ago; threshold is 3m0s
	Sep 25 11:39:53 old-k8s-version-694015 kubelet[6852]: I0925 11:39:53.092820    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 9m6.870276979s ago; threshold is 3m0s
	Sep 25 11:39:58 old-k8s-version-694015 kubelet[6852]: I0925 11:39:58.093478    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 9m11.870931398s ago; threshold is 3m0s
	Sep 25 11:40:03 old-k8s-version-694015 kubelet[6852]: I0925 11:40:03.093775    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 9m16.871233114s ago; threshold is 3m0s
	Sep 25 11:40:08 old-k8s-version-694015 kubelet[6852]: I0925 11:40:08.094530    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 9m21.871914466s ago; threshold is 3m0s
	
	* 
	* ==> kubernetes-dashboard [0f9de8bda7fb] <==
	* 2023/09/25 11:31:02 Generating JWE encryption key
	2023/09/25 11:31:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/09/25 11:31:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/09/25 11:31:03 Initializing JWE encryption key from synchronized object
	2023/09/25 11:31:03 Creating in-cluster Sidecar client
	2023/09/25 11:31:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:31:03 Serving insecurely on HTTP port: 9090
	2023/09/25 11:31:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:32:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:32:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:33:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:33:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:34:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:34:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:35:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:35:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:36:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:36:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:37:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:37:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:38:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:38:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:39:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:39:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:40:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [90dc66317fc1] <==
	* I0925 11:30:51.322039       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 11:30:51.347548       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 11:30:51.348062       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 11:30:51.364910       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 11:30:51.365497       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-694015_c2b9f123-e72a-43cd-8aaf-531be42e41fa!
	I0925 11:30:51.368701       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82068dcb-41ed-493c-a127-6ea04652eda5", APIVersion:"v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-694015_c2b9f123-e72a-43cd-8aaf-531be42e41fa became leader
	I0925 11:30:51.466721       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-694015_c2b9f123-e72a-43cd-8aaf-531be42e41fa!
	

                                                
                                                
-- /stdout --
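Two symptoms in the log above account for the failed start. First, the node is stuck NotReady because the kubelet's PLEG was last active more than nine minutes ago against a 3m0s threshold, so pod synchronization is being skipped. Second, the crictl call at the end of the container listing dies with "Unimplemented ... runtime.v1.RuntimeService"; that is expected on this cluster, since the dockershim bundled with Kubernetes v1.16 serves only the older CRI v1alpha2 API, while current crictl builds probe CRI v1. The probe can be reproduced by hand (a sketch, assuming crictl is present on the node image):

	$ minikube -p old-k8s-version-694015 ssh -- \
	    sudo crictl --runtime-endpoint unix:///var/run/dockershim.sock version
	# expected here: rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService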
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694015 -n old-k8s-version-694015
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-694015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-5644d7b6d9-qnqxm kube-proxy-gsdzk metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-694015 describe pod coredns-5644d7b6d9-qnqxm kube-proxy-gsdzk metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-694015 describe pod coredns-5644d7b6d9-qnqxm kube-proxy-gsdzk metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w: exit status 1 (65.218763ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-qnqxm" not found
	Error from server (NotFound): pods "kube-proxy-gsdzk" not found
	Error from server (NotFound): pods "metrics-server-74d5856cc6-wbskx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-d6b4b5544-mxvxx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-84b68f675b-z674w" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-694015 describe pod coredns-5644d7b6d9-qnqxm kube-proxy-gsdzk metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w: exit status 1
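The NotFound errors above are an artifact of the post-mortem helper rather than evidence that the pods were deleted: the describe command passes no -n/--namespace flag, so kubectl looks for all six pods in the default namespace, while they actually live in kube-system and kubernetes-dashboard (see the non-running-pods listing above). A namespace-qualified query of the form below would have located them:

	$ kubectl --context old-k8s-version-694015 -n kube-system describe pod \
	    coredns-5644d7b6d9-qnqxm kube-proxy-gsdzk metrics-server-74d5856cc6-wbskx storage-provisioner
	$ kubectl --context old-k8s-version-694015 -n kubernetes-dashboard describe pod \
	    dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w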
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (933.21s)
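To iterate on just this failure, the subtest can be re-run in isolation with go test's -run filter. This is a sketch assuming a minikube source checkout; the ./test/integration path and the --minikube-start-args flag follow minikube's integration-test conventions and may need adjusting for a given environment:

	$ go test ./test/integration -timeout 40m \
	    -run "TestStartStop/group/old-k8s-version/serial/SecondStart" \
	    -args --minikube-start-args="--driver=kvm2"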

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-z674w" [5d234114-a13f-403f-98e0-7b5fbf830fdd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0925 11:40:25.175629   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 11:40:27.126558   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:40:30.350135   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:40:34.205374   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:40:57.911944   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:41:19.413909   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 11:41:27.536033   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:41:46.065205   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 11:41:50.173943   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:41:53.395617   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:42:20.955419   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:42:33.375237   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:42:45.262926   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:42:47.581881   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:42:47.790479   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:42:50.587091   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:43:16.447975   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:43:22.682954   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/flannel-299646/client.crt: no such file or directory
E0925 11:44:08.308108   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:44:10.835507   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:44:11.161818   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:44:31.880508   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/no-preload-863905/client.crt: no such file or directory
E0925 11:44:38.698924   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/default-k8s-diff-port-319133/client.crt: no such file or directory
E0925 11:45:25.175559   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 11:45:27.125844   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:45:30.349773   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:45:57.911139   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:46:19.413899   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 11:46:19.493170   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:46:27.536138   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:46:46.064802   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 11:47:33.375633   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:47:42.464755   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 11:47:45.263352   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:47:47.582543   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:47:47.790432   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:48:16.448831   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:48:22.682566   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/flannel-299646/client.crt: no such file or directory
E0925 11:49:11.161904   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
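The cert_rotation errors interleaved with the wait above come from the test binary's shared client-go machinery, not from this subtest: its certificate-rotation watchers still reference client.crt files of profiles deleted earlier in the run (bridge-299646, calico-299646, and so on), so each reload fails with "no such file or directory". A quick way to confirm those profiles are gone (path taken from the harness environment above):

	$ ls /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/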
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694015 -n old-k8s-version-694015
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-25 11:49:14.027064255 +0000 UTC m=+4539.194420117
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context old-k8s-version-694015 describe po kubernetes-dashboard-84b68f675b-z674w -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) kubectl --context old-k8s-version-694015 describe po kubernetes-dashboard-84b68f675b-z674w -n kubernetes-dashboard:
Name:             kubernetes-dashboard-84b68f675b-z674w
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-694015/192.168.50.17
Start Time:       Mon, 25 Sep 2023 11:30:50 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=84b68f675b
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/kubernetes-dashboard-84b68f675b
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-rpvvp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kubernetes-dashboard-token-rpvvp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-rpvvp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  18m   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-84b68f675b-z674w to old-k8s-version-694015
  Normal  Pulling    18m   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Normal  Pulled     18m   kubelet            Successfully pulled image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Normal  Created    18m   kubelet            Created container kubernetes-dashboard
  Normal  Started    18m   kubelet            Started container kubernetes-dashboard
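Note the contradiction in the description above: the events say the dashboard container was pulled, created, and started 18 minutes ago, yet the pod now reports Waiting/ContainerCreating with an empty Container ID. That is consistent with the kubelet log from the previous failure, where pod synchronization is skipped because PLEG is unhealthy, so the restarted kubelet never re-observes or re-creates the container. To see what the Docker runtime itself holds, one could check directly (a sketch using minikube ssh):

	$ minikube -p old-k8s-version-694015 ssh -- sudo docker ps -a --filter name=kubernetes-dashboard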
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context old-k8s-version-694015 logs kubernetes-dashboard-84b68f675b-z674w -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context old-k8s-version-694015 logs kubernetes-dashboard-84b68f675b-z674w -n kubernetes-dashboard: exit status 1 (77.606478ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-84b68f675b-z674w" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
start_stop_delete_test.go:274: kubectl --context old-k8s-version-694015 logs kubernetes-dashboard-84b68f675b-z674w -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
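The 9m0s wait performed by the test corresponds roughly to the following manual check (a sketch; label, namespace, and context are taken from the test output above):

	$ kubectl --context old-k8s-version-694015 -n kubernetes-dashboard \
	    wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s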
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694015 -n old-k8s-version-694015
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-694015 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-694015 logs -n 25: (1.044834988s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p newest-cni-372603                                   | newest-cni-372603            | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-372603                                   | newest-cni-372603            | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-372603                                   | newest-cni-372603            | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	| delete  | -p newest-cni-372603                                   | newest-cni-372603            | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-785493 | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	|         | disable-driver-mounts-785493                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-094323            | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-094323                 | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p no-preload-863905 sudo                              | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-863905                                   | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-863905                                   | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-863905                                   | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	| delete  | -p no-preload-863905                                   | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	| ssh     | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-094323 sudo                             | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	| delete  | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 11:28:19
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
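
The header above fixes the klog line format used for the rest of this log. As an illustration only, not code from minikube, here is a minimal Go sketch that parses one such line, assuming the format exactly as stated:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the stated format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I0925 11:28:19.035134   59899 out.go:296] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
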
	I0925 11:28:19.035134   59899 out.go:296] Setting OutFile to fd 1 ...
	I0925 11:28:19.035380   59899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:28:19.035388   59899 out.go:309] Setting ErrFile to fd 2...
	I0925 11:28:19.035392   59899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:28:19.035594   59899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
	I0925 11:28:19.036084   59899 out.go:303] Setting JSON to false
	I0925 11:28:19.037024   59899 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4250,"bootTime":1695637049,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 11:28:19.037076   59899 start.go:138] virtualization: kvm guest
	I0925 11:28:19.039385   59899 out.go:177] * [embed-certs-094323] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 11:28:19.041106   59899 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 11:28:19.041220   59899 notify.go:220] Checking for updates...
	I0925 11:28:19.042531   59899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:28:19.043924   59899 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:28:19.045264   59899 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	I0925 11:28:19.046665   59899 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 11:28:19.047943   59899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 11:28:19.049713   59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 11:28:19.050284   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:28:19.050336   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:28:19.066768   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0925 11:28:19.067166   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:28:19.067840   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:28:19.067866   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:28:19.068328   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:28:19.068548   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:19.069227   59899 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 11:28:19.070747   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:28:19.070796   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:28:19.084889   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38725
	I0925 11:28:19.085259   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:28:19.085647   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:28:19.085666   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:28:19.085966   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:28:19.086156   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:19.120695   59899 out.go:177] * Using the kvm2 driver based on existing profile
	I0925 11:28:19.122195   59899 start.go:298] selected driver: kvm2
	I0925 11:28:19.122213   59899 start.go:902] validating driver "kvm2" against &{Name:embed-certs-094323 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 11:28:19.122331   59899 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 11:28:19.122990   59899 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:28:19.123070   59899 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17297-6032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0925 11:28:19.137559   59899 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0925 11:28:19.137967   59899 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 11:28:19.138031   59899 cni.go:84] Creating CNI manager for ""
	I0925 11:28:19.138049   59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:28:19.138061   59899 start_flags.go:321] config:
	{Name:embed-certs-094323 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 11:28:19.138243   59899 iso.go:125] acquiring lock: {Name:mkb9e2f6e1d5a2b50ee182236ae1b19ef3677829 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:28:19.139914   59899 out.go:177] * Starting control plane node embed-certs-094323 in cluster embed-certs-094323
	I0925 11:28:19.141213   59899 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 11:28:19.141251   59899 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0925 11:28:19.141267   59899 cache.go:57] Caching tarball of preloaded images
	I0925 11:28:19.141342   59899 preload.go:174] Found /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0925 11:28:19.141351   59899 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 11:28:19.141434   59899 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/config.json ...
	I0925 11:28:19.141593   59899 start.go:365] acquiring machines lock for embed-certs-094323: {Name:mk02fb3d97d6ed60b07ca18d96424c593d1bb8d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 11:28:19.141630   59899 start.go:369] acquired machines lock for "embed-certs-094323" in 22.488µs
	I0925 11:28:19.141643   59899 start.go:96] Skipping create...Using existing machine configuration
	I0925 11:28:19.141651   59899 fix.go:54] fixHost starting: 
	I0925 11:28:19.141918   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:28:19.141948   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:28:19.155211   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0925 11:28:19.155620   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:28:19.156032   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:28:19.156055   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:28:19.156384   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:28:19.156590   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:19.156767   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:28:19.158188   59899 fix.go:102] recreateIfNeeded on embed-certs-094323: state=Stopped err=<nil>
	I0925 11:28:19.158223   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	W0925 11:28:19.158395   59899 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 11:28:19.160159   59899 out.go:177] * Restarting existing kvm2 VM for "embed-certs-094323" ...
	I0925 11:28:15.403806   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:17.404448   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:19.405067   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:15.674829   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:18.175095   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:20.492932   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:22.991315   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
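
The interleaved pod_ready lines above and below come from wait loops (one per test profile) that poll each metrics-server pod's Ready condition until it turns True or the wait deadline expires. A minimal sketch of such a poll, assuming client-go is available; the pod name and namespace are copied from the log, and this is not minikube's actual pod_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(
				context.Background(), "metrics-server-57f55c9bc5-p2tvr", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println(`pod has status "Ready":"False"`)
			time.Sleep(2 * time.Second)
		}
	}
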
	I0925 11:28:19.161340   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Start
	I0925 11:28:19.161501   59899 main.go:141] libmachine: (embed-certs-094323) Ensuring networks are active...
	I0925 11:28:19.162257   59899 main.go:141] libmachine: (embed-certs-094323) Ensuring network default is active
	I0925 11:28:19.162588   59899 main.go:141] libmachine: (embed-certs-094323) Ensuring network mk-embed-certs-094323 is active
	I0925 11:28:19.163048   59899 main.go:141] libmachine: (embed-certs-094323) Getting domain xml...
	I0925 11:28:19.163763   59899 main.go:141] libmachine: (embed-certs-094323) Creating domain...
	I0925 11:28:20.442361   59899 main.go:141] libmachine: (embed-certs-094323) Waiting to get IP...
	I0925 11:28:20.443271   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:20.443734   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:20.443823   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:20.443734   59935 retry.go:31] will retry after 267.692283ms: waiting for machine to come up
	I0925 11:28:20.713388   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:20.713952   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:20.713983   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:20.713901   59935 retry.go:31] will retry after 277.980932ms: waiting for machine to come up
	I0925 11:28:20.993556   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:20.994198   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:20.994234   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:20.994172   59935 retry.go:31] will retry after 459.010271ms: waiting for machine to come up
	I0925 11:28:21.454879   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:21.455430   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:21.455461   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:21.455383   59935 retry.go:31] will retry after 366.809435ms: waiting for machine to come up
	I0925 11:28:21.824207   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:21.824773   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:21.824806   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:21.824720   59935 retry.go:31] will retry after 488.071541ms: waiting for machine to come up
	I0925 11:28:22.314305   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:22.314790   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:22.314818   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:22.314762   59935 retry.go:31] will retry after 945.003407ms: waiting for machine to come up
	I0925 11:28:23.261899   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:23.262367   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:23.262409   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:23.262317   59935 retry.go:31] will retry after 1.092936458s: waiting for machine to come up
	I0925 11:28:21.407022   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:23.905338   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:20.674171   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:22.674573   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:25.174611   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:24.991430   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:27.491751   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:24.357394   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:24.358014   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:24.358072   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:24.357975   59935 retry.go:31] will retry after 1.364274695s: waiting for machine to come up
	I0925 11:28:25.723341   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:25.723819   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:25.723848   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:25.723762   59935 retry.go:31] will retry after 1.588423993s: waiting for machine to come up
	I0925 11:28:27.313769   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:27.314265   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:27.314299   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:27.314211   59935 retry.go:31] will retry after 1.537433598s: waiting for machine to come up
	I0925 11:28:28.853890   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:28.854449   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:28.854472   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:28.854378   59935 retry.go:31] will retry after 2.010519573s: waiting for machine to come up
	I0925 11:28:26.405198   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:28.409892   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:27.673983   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:29.675459   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:29.492466   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:31.493901   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:30.867498   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:30.868057   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:30.868084   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:30.868021   59935 retry.go:31] will retry after 2.230830763s: waiting for machine to come up
	I0925 11:28:33.100983   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:33.101572   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:33.101612   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:33.101515   59935 retry.go:31] will retry after 4.360204715s: waiting for machine to come up
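
The "will retry after" delays above grow roughly geometrically with added jitter while the driver waits for the VM to pick up a DHCP lease. A minimal backoff-with-jitter sketch of that pattern, illustrative only and not minikube's retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn until it succeeds, sleeping a jittered, doubling delay
	// between attempts, similar in shape to the delays logged above.
	func retry(maxAttempts int, base time.Duration, fn func() error) error {
		delay := base
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			err := fn()
			if err == nil {
				return nil
			}
			if attempt == maxAttempts {
				return err
			}
			// pick a delay in [delay/2, 1.5*delay), then double the base
			jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", jittered, err)
			time.Sleep(jittered)
			delay *= 2
		}
		return nil
	}

	func main() {
		attempts := 0
		_ = retry(8, 300*time.Millisecond, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Printf("succeeded after %d attempts\n", attempts)
	}
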
	I0925 11:28:30.903969   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:32.905907   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:32.173159   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:34.672934   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:33.990422   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:35.990706   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:37.992428   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:37.463184   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.463720   59899 main.go:141] libmachine: (embed-certs-094323) Found IP for machine: 192.168.39.111
	I0925 11:28:37.463748   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has current primary IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.463757   59899 main.go:141] libmachine: (embed-certs-094323) Reserving static IP address...
	I0925 11:28:37.464174   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "embed-certs-094323", mac: "52:54:00:07:77:47", ip: "192.168.39.111"} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.464215   59899 main.go:141] libmachine: (embed-certs-094323) DBG | skip adding static IP to network mk-embed-certs-094323 - found existing host DHCP lease matching {name: "embed-certs-094323", mac: "52:54:00:07:77:47", ip: "192.168.39.111"}
	I0925 11:28:37.464230   59899 main.go:141] libmachine: (embed-certs-094323) Reserved static IP address: 192.168.39.111
	I0925 11:28:37.464248   59899 main.go:141] libmachine: (embed-certs-094323) Waiting for SSH to be available...
	I0925 11:28:37.464264   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Getting to WaitForSSH function...
	I0925 11:28:37.466402   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.466816   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.466843   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.467015   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Using SSH client type: external
	I0925 11:28:37.467053   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Using SSH private key: /home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa (-rw-------)
	I0925 11:28:37.467087   59899 main.go:141] libmachine: (embed-certs-094323) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0925 11:28:37.467100   59899 main.go:141] libmachine: (embed-certs-094323) DBG | About to run SSH command:
	I0925 11:28:37.467136   59899 main.go:141] libmachine: (embed-certs-094323) DBG | exit 0
	I0925 11:28:37.556399   59899 main.go:141] libmachine: (embed-certs-094323) DBG | SSH cmd err, output: <nil>: 
	I0925 11:28:37.556778   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetConfigRaw
	I0925 11:28:37.557414   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
	I0925 11:28:37.560030   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.560395   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.560428   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.560640   59899 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/config.json ...
	I0925 11:28:37.560845   59899 machine.go:88] provisioning docker machine ...
	I0925 11:28:37.560864   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:37.561073   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetMachineName
	I0925 11:28:37.561221   59899 buildroot.go:166] provisioning hostname "embed-certs-094323"
	I0925 11:28:37.561235   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetMachineName
	I0925 11:28:37.561420   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:37.563597   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.563895   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.563925   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.564030   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:37.564225   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.564405   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.564531   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:37.564705   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:37.565158   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:37.565180   59899 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-094323 && echo "embed-certs-094323" | sudo tee /etc/hostname
	I0925 11:28:37.695364   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-094323
	
	I0925 11:28:37.695398   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:37.698664   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.699091   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.699124   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.699344   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:37.699550   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.699717   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.699901   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:37.700108   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:37.700483   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:37.700503   59899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-094323' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-094323/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-094323' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 11:28:37.824658   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 11:28:37.824711   59899 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17297-6032/.minikube CaCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17297-6032/.minikube}
	I0925 11:28:37.824734   59899 buildroot.go:174] setting up certificates
	I0925 11:28:37.824745   59899 provision.go:83] configureAuth start
	I0925 11:28:37.824759   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetMachineName
	I0925 11:28:37.825074   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
	I0925 11:28:37.827695   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.828087   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.828131   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.828262   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:37.830526   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.830866   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.830897   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.830986   59899 provision.go:138] copyHostCerts
	I0925 11:28:37.831038   59899 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem, removing ...
	I0925 11:28:37.831050   59899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem
	I0925 11:28:37.831116   59899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem (1078 bytes)
	I0925 11:28:37.831199   59899 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem, removing ...
	I0925 11:28:37.831208   59899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem
	I0925 11:28:37.831231   59899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem (1123 bytes)
	I0925 11:28:37.831315   59899 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem, removing ...
	I0925 11:28:37.831322   59899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem
	I0925 11:28:37.831343   59899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem (1679 bytes)
	I0925 11:28:37.831388   59899 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem org=jenkins.embed-certs-094323 san=[192.168.39.111 192.168.39.111 localhost 127.0.0.1 minikube embed-certs-094323]
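
The provision step above issues a server certificate whose SANs cover the VM IP, loopback, and the machine names, so the Docker TLS endpoint is valid under any of them. A minimal self-signed sketch with the same SANs using only the standard library; minikube signs with its CA instead, and the organization and lifetime here are copied from the log purely for illustration:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// 2048-bit key for brevity; error handling elided in this sketch
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-094323"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log line above
			IPAddresses: []net.IP{net.ParseIP("192.168.39.111"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "embed-certs-094323"},
		}
		// self-signed here; minikube signs with ca.pem/ca-key.pem instead
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
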
	I0925 11:28:37.908612   59899 provision.go:172] copyRemoteCerts
	I0925 11:28:37.908700   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 11:28:37.908735   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:37.911729   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.912109   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.912140   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.912334   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:37.912534   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.912716   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:37.912845   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:28:37.998547   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 11:28:38.026509   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0925 11:28:38.050201   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 11:28:38.074649   59899 provision.go:86] duration metric: configureAuth took 249.890915ms
	I0925 11:28:38.074676   59899 buildroot.go:189] setting minikube options for container-runtime
	I0925 11:28:38.074944   59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 11:28:38.074975   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:38.075242   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:38.078170   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.078528   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:38.078567   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.078795   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:38.078989   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.079174   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.079356   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:38.079539   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:38.079964   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:38.079984   59899 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 11:28:38.198741   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 11:28:38.198765   59899 buildroot.go:70] root file system type: tmpfs
	I0925 11:28:38.198890   59899 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 11:28:38.198915   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:38.201807   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.202182   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:38.202213   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.202351   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:38.202547   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.202711   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.202847   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:38.202992   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:38.203346   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:38.203422   59899 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 11:28:38.330031   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 11:28:38.330061   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:38.333195   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.333537   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:38.333568   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.333754   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:38.333924   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.334109   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.334259   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:38.334428   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:38.334869   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:38.334898   59899 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
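
The one-liner above is an idempotent update: the new unit only replaces the old one, followed by a daemon-reload and a docker restart, when the rendered content actually differs. A minimal Go sketch of the same write-if-changed pattern, illustrative only (it prints instead of invoking systemctl):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// writeIfChanged atomically replaces path with rendered and reports whether
	// the content changed, mirroring the diff || { mv; restart; } shell pattern.
	func writeIfChanged(path string, rendered []byte) (bool, error) {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return false, nil // identical: the diff succeeds and nothing restarts
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(path+".new", path)
	}

	func main() {
		// demo against a scratch path; the real target is /lib/systemd/system/docker.service
		changed, err := writeIfChanged("/tmp/docker.service.demo", []byte("[Unit]\nDescription=demo\n"))
		if err != nil {
			fmt.Println(err)
			return
		}
		if changed {
			fmt.Println("unit changed: would run systemctl daemon-reload, enable docker, restart docker")
		} else {
			fmt.Println("unit unchanged: restart skipped")
		}
	}
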
	I0925 11:28:35.403941   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:37.405325   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:36.673537   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:38.675023   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:39.250696   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 11:28:39.250732   59899 machine.go:91] provisioned docker machine in 1.689868908s
	I0925 11:28:39.250752   59899 start.go:300] post-start starting for "embed-certs-094323" (driver="kvm2")
	I0925 11:28:39.250766   59899 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 11:28:39.250786   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.251224   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 11:28:39.251260   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:39.254399   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.254904   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.254937   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.255093   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:39.255261   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.255432   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:39.255612   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:28:39.350663   59899 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 11:28:39.357361   59899 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 11:28:39.357388   59899 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-6032/.minikube/addons for local assets ...
	I0925 11:28:39.357464   59899 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-6032/.minikube/files for local assets ...
	I0925 11:28:39.357582   59899 filesync.go:149] local asset: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem -> 132132.pem in /etc/ssl/certs
	I0925 11:28:39.357712   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 11:28:39.374752   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem --> /etc/ssl/certs/132132.pem (1708 bytes)
	I0925 11:28:39.407365   59899 start.go:303] post-start completed in 156.599445ms
	I0925 11:28:39.407390   59899 fix.go:56] fixHost completed within 20.265737349s
	I0925 11:28:39.407412   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:39.409869   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.410204   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.410246   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.410351   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:39.410526   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.410672   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.410817   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:39.411004   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:39.411443   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:39.411457   59899 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0925 11:28:39.525878   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695641319.473578694
	
	I0925 11:28:39.525906   59899 fix.go:206] guest clock: 1695641319.473578694
	I0925 11:28:39.525916   59899 fix.go:219] Guest: 2023-09-25 11:28:39.473578694 +0000 UTC Remote: 2023-09-25 11:28:39.407394176 +0000 UTC m=+20.400726255 (delta=66.184518ms)
	I0925 11:28:39.525941   59899 fix.go:190] guest clock delta is within tolerance: 66.184518ms
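The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and skip a time resync when the delta stays inside a tolerance window (the 66ms delta passes here). A sketch of that comparison; the one-second tolerance below is an assumption for illustration, the log only shows that 66.184518ms was accepted:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether guest and host clocks are close enough
// to skip resyncing the guest's time.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(66 * time.Millisecond) // delta of the same order as in the log
	d, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
}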
	I0925 11:28:39.525949   59899 start.go:83] releasing machines lock for "embed-certs-094323", held for 20.384309776s
	I0925 11:28:39.525980   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.526255   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
	I0925 11:28:39.528977   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.529347   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.529375   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.529553   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.530157   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.530328   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.530430   59899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 11:28:39.530480   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:39.530741   59899 ssh_runner.go:195] Run: cat /version.json
	I0925 11:28:39.530766   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:39.533347   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.533598   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.533796   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.533834   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.534008   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.534017   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:39.534033   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.534116   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:39.534328   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.534397   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.534497   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:39.534546   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:39.534701   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:28:39.534716   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:28:39.619280   59899 ssh_runner.go:195] Run: systemctl --version
	I0925 11:28:39.651081   59899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 11:28:39.656908   59899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 11:28:39.656977   59899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 11:28:39.674233   59899 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
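The find command above sidelines any bridge/podman CNI configs under /etc/cni/net.d so they cannot conflict with the CNI minikube is about to configure; the .mk_disabled suffix marks them as set aside rather than deleted. The same rename-to-disable pattern in Go, using filepath.Glob (a sketch; paths and suffix are as in the log, error handling condensed):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableCNIConfigs renames bridge/podman CNI config files under dir
// by appending ".mk_disabled", skipping files already disabled.
func disableCNIConfigs(dir string) ([]string, error) {
	var disabled []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("disabled:", files)
}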
	I0925 11:28:39.674259   59899 start.go:469] detecting cgroup driver to use...
	I0925 11:28:39.674415   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 11:28:39.693891   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 11:28:39.704196   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 11:28:39.714537   59899 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 11:28:39.714587   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 11:28:39.724833   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 11:28:39.734476   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 11:28:39.744763   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 11:28:39.755865   59899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 11:28:39.765565   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 11:28:39.775652   59899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 11:28:39.785628   59899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 11:28:39.794828   59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:28:39.915710   59899 ssh_runner.go:195] Run: sudo systemctl restart containerd
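The sed commands above rewrite /etc/containerd/config.toml in place: pin the sandbox image to registry.k8s.io/pause:3.9, set restrict_oom_score_adj to false, switch SystemdCgroup to false (i.e. the cgroupfs driver announced in the containerd.go line), and migrate any runtime.v1/runc.v1 references to io.containerd.runc.v2, before the daemon-reload and containerd restart. The SystemdCgroup edit expressed as a Go regexp instead of sed (an illustrative equivalent, not minikube's code):

package main

import (
	"fmt"
	"regexp"
)

// setCgroupfs flips any "SystemdCgroup = ..." line in a containerd
// config.toml to false, mirroring the sed command in the log.
func setCgroupfs(config string) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	fmt.Print(setCgroupfs(in))
}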
	I0925 11:28:39.933084   59899 start.go:469] detecting cgroup driver to use...
	I0925 11:28:39.933164   59899 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 11:28:39.949304   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 11:28:39.963709   59899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 11:28:39.980784   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 11:28:39.994887   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 11:28:40.007408   59899 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 11:28:40.034805   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 11:28:40.047786   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 11:28:40.066171   59899 ssh_runner.go:195] Run: which cri-dockerd
	I0925 11:28:40.070494   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 11:28:40.078000   59899 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 11:28:40.093462   59899 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 11:28:40.197902   59899 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 11:28:40.313798   59899 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 11:28:40.313947   59899 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 11:28:40.330472   59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:28:40.443989   59899 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 11:28:41.943902   59899 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.49987353s)
	I0925 11:28:41.943995   59899 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 11:28:42.063894   59899 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 11:28:42.177577   59899 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 11:28:42.291042   59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:28:42.407796   59899 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 11:28:42.429673   59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:28:42.553611   59899 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0925 11:28:42.637258   59899 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 11:28:42.637336   59899 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 11:28:42.643315   59899 start.go:537] Will wait 60s for crictl version
	I0925 11:28:42.643380   59899 ssh_runner.go:195] Run: which crictl
	I0925 11:28:42.647521   59899 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 11:28:42.709061   59899 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0925 11:28:42.709123   59899 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 11:28:42.735005   59899 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
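The "docker version --format {{.Server.Version}}" calls above use the docker CLI's Go-template output mode to extract a single field from the version response; the later "docker info --format {{.CgroupDriver}}" works the same way. The underlying mechanism in plain Go (the struct shape below only mimics the CLI's data for illustration):

package main

import (
	"os"
	"text/template"
)

type server struct{ Version string }
type versionInfo struct{ Server server }

func main() {
	// Equivalent of: docker version --format '{{.Server.Version}}'
	tmpl := template.Must(template.New("v").Parse("{{.Server.Version}}\n"))
	_ = tmpl.Execute(os.Stdout, versionInfo{Server: server{Version: "24.0.6"}})
}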
	I0925 11:28:39.992653   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:42.493405   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:42.763193   59899 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0925 11:28:42.763239   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
	I0925 11:28:42.766116   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:42.766453   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:42.766487   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:42.766740   59899 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0925 11:28:42.770645   59899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
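The bash one-liner above is a safe upsert of a hosts entry: filter out any stale host.minikube.internal line, append a fresh one, write to a temp file keyed by the shell PID (/tmp/h.$$), and only then cp it over /etc/hosts so readers never see a half-written file. The same pattern is used again later for control-plane.minikube.internal. A Go rendering of the command builder (upsertHostsCmd is a hypothetical helper name):

package main

import "fmt"

// upsertHostsCmd returns a shell command that replaces any existing
// line for host with "ip<TAB>host" in /etc/hosts via a temp file.
// The gap between the two verbs inside the echo is a literal tab.
func upsertHostsCmd(ip, host string) string {
	return fmt.Sprintf(
		`{ grep -v $'\t%[2]s$' /etc/hosts; echo "%[1]s	%[2]s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts`,
		ip, host)
}

func main() {
	fmt.Println(upsertHostsCmd("192.168.39.1", "host.minikube.internal"))
}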
	I0925 11:28:42.782793   59899 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 11:28:42.782837   59899 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:28:42.805110   59899 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0925 11:28:42.805135   59899 docker.go:594] Images already preloaded, skipping extraction
	I0925 11:28:42.805190   59899 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:28:42.824840   59899 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0925 11:28:42.824876   59899 cache_images.go:84] Images are preloaded, skipping loading
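"Images are preloaded, skipping loading" is decided by listing "docker images --format {{.Repository}}:{{.Tag}}" (the stdout blocks above) and checking that every required image appears in the listing. A minimal sketch of that containment check (allPreloaded is an illustrative name; the sample images are taken from the log):

package main

import "fmt"

// allPreloaded reports whether every required image already shows up
// in the `docker images` listing, so extraction can be skipped.
func allPreloaded(listed, required []string) bool {
	have := make(map[string]bool, len(listed))
	for _, img := range listed {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	listed := []string{
		"registry.k8s.io/kube-apiserver:v1.28.2",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/pause:3.9",
	}
	fmt.Println(allPreloaded(listed, []string{"registry.k8s.io/pause:3.9"}))
}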
	I0925 11:28:42.824941   59899 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 11:28:42.858255   59899 cni.go:84] Creating CNI manager for ""
	I0925 11:28:42.858285   59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:28:42.858303   59899 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 11:28:42.858319   59899 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.111 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-094323 NodeName:embed-certs-094323 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 11:28:42.858443   59899 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-094323"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
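The kubeadm config printed above is four YAML documents in a single file, separated by ---: InitConfiguration (node registration and the local API endpoint), ClusterConfiguration (cluster-wide component flags and cert SANs), KubeletConfiguration, and KubeProxyConfiguration. Splitting such a file back into its documents needs only the standard library; a sketch:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in for the generated /var/tmp/minikube/kubeadm.yaml.
	kubeadmYAML := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	for i, doc := range strings.Split(kubeadmYAML, "\n---\n") {
		fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(doc))
	}
}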
	
	I0925 11:28:42.858508   59899 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-094323 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 11:28:42.858563   59899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 11:28:42.868791   59899 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 11:28:42.868861   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 11:28:42.878094   59899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0925 11:28:42.894185   59899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 11:28:42.910390   59899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
	I0925 11:28:42.929194   59899 ssh_runner.go:195] Run: grep 192.168.39.111	control-plane.minikube.internal$ /etc/hosts
	I0925 11:28:42.933290   59899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 11:28:42.946061   59899 certs.go:56] Setting up /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323 for IP: 192.168.39.111
	I0925 11:28:42.946095   59899 certs.go:190] acquiring lock for shared ca certs: {Name:mkb77fd8e605e52ea68ab5351af7de9da389c0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:28:42.946253   59899 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17297-6032/.minikube/ca.key
	I0925 11:28:42.946292   59899 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.key
	I0925 11:28:42.946354   59899 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/client.key
	I0925 11:28:42.946414   59899 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/apiserver.key.f4aa454f
	I0925 11:28:42.946448   59899 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/proxy-client.key
	I0925 11:28:42.946581   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213.pem (1338 bytes)
	W0925 11:28:42.946628   59899 certs.go:433] ignoring /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213_empty.pem, impossibly tiny 0 bytes
	I0925 11:28:42.946648   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 11:28:42.946675   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem (1078 bytes)
	I0925 11:28:42.946706   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem (1123 bytes)
	I0925 11:28:42.946743   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem (1679 bytes)
	I0925 11:28:42.946793   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem (1708 bytes)
	I0925 11:28:42.947417   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 11:28:42.970517   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0925 11:28:42.995598   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 11:28:43.019025   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0925 11:28:43.044246   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 11:28:43.068806   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0925 11:28:43.093317   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 11:28:43.117196   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 11:28:43.140309   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213.pem --> /usr/share/ca-certificates/13213.pem (1338 bytes)
	I0925 11:28:43.164129   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem --> /usr/share/ca-certificates/132132.pem (1708 bytes)
	I0925 11:28:43.187747   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 11:28:43.211759   59899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 11:28:43.229751   59899 ssh_runner.go:195] Run: openssl version
	I0925 11:28:43.235370   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13213.pem && ln -fs /usr/share/ca-certificates/13213.pem /etc/ssl/certs/13213.pem"
	I0925 11:28:43.244462   59899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13213.pem
	I0925 11:28:43.249084   59899 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 10:38 /usr/share/ca-certificates/13213.pem
	I0925 11:28:43.249131   59899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13213.pem
	I0925 11:28:43.254522   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13213.pem /etc/ssl/certs/51391683.0"
	I0925 11:28:43.263996   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132132.pem && ln -fs /usr/share/ca-certificates/132132.pem /etc/ssl/certs/132132.pem"
	I0925 11:28:43.273424   59899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132132.pem
	I0925 11:28:43.278155   59899 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 10:38 /usr/share/ca-certificates/132132.pem
	I0925 11:28:43.278194   59899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132132.pem
	I0925 11:28:43.283762   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132132.pem /etc/ssl/certs/3ec20f2e.0"
	I0925 11:28:43.293817   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 11:28:43.303828   59899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:28:43.309173   59899 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:28:43.309215   59899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:28:43.315555   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 11:28:43.325092   59899 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 11:28:43.329555   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0925 11:28:43.335420   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0925 11:28:43.341663   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0925 11:28:43.347218   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0925 11:28:43.352934   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0925 11:28:43.359116   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
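Two OpenSSL idioms appear in this block: "openssl x509 -hash -noout -in cert.pem" prints the subject-name hash that OpenSSL expects as the <hash>.0 symlink name in /etc/ssl/certs (hence the ln -fs to 51391683.0, 3ec20f2e.0, b5213941.0), and "openssl x509 -noout -in cert.crt -checkend 86400" exits non-zero if the cert expires within the next 86400 seconds (24h), which is how each control-plane cert is vetted before reuse. A sketch of the expiry probe from Go (assumes an openssl binary on PATH, as on the test VM):

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinDay reports whether certPath expires in the next 24h,
// using openssl's -checkend exit status. A non-zero exit (or a failure
// to run openssl at all) is treated as "will expire".
func expiresWithinDay(certPath string) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	return cmd.Run() != nil
}

func main() {
	fmt.Println(expiresWithinDay("/var/lib/minikube/certs/apiserver-etcd-client.crt"))
}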
	I0925 11:28:43.364415   59899 kubeadm.go:404] StartCluster: {Name:embed-certs-094323 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 11:28:43.364539   59899 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 11:28:43.383931   59899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 11:28:43.393096   59899 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0925 11:28:43.393114   59899 kubeadm.go:636] restartCluster start
	I0925 11:28:43.393149   59899 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0925 11:28:43.402414   59899 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:43.403165   59899 kubeconfig.go:135] verify returned: extract IP: "embed-certs-094323" does not appear in /home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:28:43.403590   59899 kubeconfig.go:146] "embed-certs-094323" context is missing from /home/jenkins/minikube-integration/17297-6032/kubeconfig - will repair!
	I0925 11:28:43.404176   59899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:28:43.405944   59899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0925 11:28:43.413960   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:43.414004   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:43.424035   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:43.424049   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:43.424076   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:43.435299   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:43.935935   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:43.936031   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:43.947516   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:39.905311   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:41.908598   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:44.404783   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:41.172736   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:43.174138   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:45.174205   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:44.990934   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:46.991805   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:44.435537   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:44.435624   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:44.447609   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:44.936220   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:44.936386   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:44.948140   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:45.435733   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:45.435829   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:45.448013   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:45.935443   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:45.935535   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:45.947333   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:46.435451   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:46.435515   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:46.447174   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:46.935705   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:46.935782   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:46.947562   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:47.436134   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:47.436202   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:47.447762   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:47.936080   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:47.936141   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:47.947832   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:48.435362   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:48.435430   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:48.446887   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:48.935379   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:48.935477   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:48.948793   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:46.904475   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:48.905486   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:47.176223   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:49.674353   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:49.491562   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:51.492069   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:53.492471   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:49.436282   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:49.436396   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:49.447719   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:49.936050   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:49.936137   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:49.948346   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:50.435443   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:50.435524   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:50.446725   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:50.936401   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:50.936479   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:50.948716   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:51.436316   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:51.436391   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:51.447984   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:51.936106   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:51.936183   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:51.951846   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:52.435363   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:52.435459   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:52.447499   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:52.936093   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:52.936170   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:52.948743   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
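The repeated "Checking apiserver status ..." blocks above are a polling loop running at roughly 500ms intervals (visible in the timestamps): pgrep -xnf exits 1 while no kube-apiserver process matches, and the loop keeps retrying until its deadline, after which restartCluster gives up ("needs reconfigure: apiserver error: context deadline exceeded" just below) and falls back to rebuilding the cluster config. The shape of such a loop in Go; the interval and timeout here are inferred from the log, not taken from minikube's source:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until a kube-apiserver process
// appears or ctx expires.
func waitForAPIServerPID(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // pgrep exit 0: a matching process exists
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServerPID(ctx))
}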
	I0925 11:28:53.414466   59899 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0925 11:28:53.414503   59899 kubeadm.go:1128] stopping kube-system containers ...
	I0925 11:28:53.414561   59899 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 11:28:53.436706   59899 docker.go:463] Stopping containers: [5433505b8c84 5955297b2651 0b460a10ea1f 8f77078f7165 339fcb3416d5 b8e7d5af3c42 41f8be78a4f7 00a2998c5488 55442ce14fe2 a9a363aa2856 e1118b32fbd4 dcf727ef2c38 d7715df7bd8b fc60135d9ddb 56727523c1f3]
	I0925 11:28:53.436785   59899 ssh_runner.go:195] Run: docker stop 5433505b8c84 5955297b2651 0b460a10ea1f 8f77078f7165 339fcb3416d5 b8e7d5af3c42 41f8be78a4f7 00a2998c5488 55442ce14fe2 a9a363aa2856 e1118b32fbd4 dcf727ef2c38 d7715df7bd8b fc60135d9ddb 56727523c1f3
	I0925 11:28:53.460993   59899 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0925 11:28:53.476266   59899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 11:28:53.485682   59899 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 11:28:53.485753   59899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 11:28:53.495238   59899 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0925 11:28:53.495259   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:53.625292   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:51.404218   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:53.404644   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:52.173594   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:54.173762   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:55.992677   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:58.491954   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:54.299318   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:54.496012   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:54.595147   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
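Because the existing kubeconfig files were missing (the "ls: cannot access ..." block earlier), the restart path replays individual kubeadm init phases rather than running a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane, and etcd, each against the same /var/tmp/minikube/kubeadm.yaml. A sketch of driving that sequence from Go (binary and config paths as in the log; runInitPhases is an illustrative helper):

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the kubeadm init phases seen in the log,
// stopping at the first failure.
func runInitPhases(kubeadm, config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{"init", "phase"}, p...), "--config", config)
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	err := runInitPhases("/var/lib/minikube/binaries/v1.28.2/kubeadm", "/var/tmp/minikube/kubeadm.yaml")
	fmt.Println("init phases:", err)
}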
	I0925 11:28:54.679425   59899 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:28:54.679506   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:54.698114   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:55.211538   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:55.711672   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:56.211025   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:56.711636   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:56.734459   59899 api_server.go:72] duration metric: took 2.055031465s to wait for apiserver process to appear ...
	I0925 11:28:56.734482   59899 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:28:56.734499   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:28:56.735092   59899 api_server.go:269] stopped: https://192.168.39.111:8443/healthz: Get "https://192.168.39.111:8443/healthz": dial tcp 192.168.39.111:8443: connect: connection refused
	I0925 11:28:56.735125   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:28:56.735727   59899 api_server.go:269] stopped: https://192.168.39.111:8443/healthz: Get "https://192.168.39.111:8443/healthz": dial tcp 192.168.39.111:8443: connect: connection refused
	I0925 11:28:57.236460   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:28:55.405884   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:57.904799   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:56.673626   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:58.673704   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:00.709537   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0925 11:29:00.709569   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0925 11:29:00.709581   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:00.795585   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0925 11:29:00.795613   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0925 11:29:00.795624   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:00.911357   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 11:29:00.911393   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 11:29:01.236809   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:01.242260   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 11:29:01.242286   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 11:29:01.735856   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:01.743534   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 11:29:01.743563   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 11:29:02.236812   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:02.247395   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 200:
	ok
	I0925 11:29:02.257253   59899 api_server.go:141] control plane version: v1.28.2
	I0925 11:29:02.257277   59899 api_server.go:131] duration metric: took 5.522789199s to wait for apiserver health ...
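Four start-up streams are interleaved in this log; the process-id field after each timestamp (57426, 57752, 57927, 59899) identifies which profile a line belongs to. The [+]/[-] lines above are the apiserver's own /healthz detail: each check or poststarthook reports ok or failed ("reason withheld" for an unprivileged caller), the endpoint answers 500 until every check passes, and then a bare 200 "ok". A minimal sketch of this kind of polling loop, assuming a plain HTTP client rather than minikube's actual api_server.go helpers:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200 ("ok") or the timeout expires. While poststarthooks are still
// running, the endpoint answers 500 with the per-check breakdown seen in
// the log above. Illustrative only; not minikube's implementation.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The bring-up certificate is not in the host trust store; a
		// production client would verify against the cluster CA instead.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.111:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}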
	I0925 11:29:02.257286   59899 cni.go:84] Creating CNI manager for ""
	I0925 11:29:02.257297   59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:29:02.258988   59899 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 11:29:00.496638   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.992616   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.260493   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 11:29:02.275303   59899 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
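The 457-byte conflist payload itself is not shown in the log. A representative bridge configuration of the kind minikube writes to /etc/cni/net.d/1-k8s.conflist looks like the sketch below; the exact fields and subnet are assumptions, not the literal bytes transferred:

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}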
	I0925 11:29:02.297272   59899 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:29:02.308818   59899 system_pods.go:59] 8 kube-system pods found
	I0925 11:29:02.308855   59899 system_pods.go:61] "coredns-5dd5756b68-7kfz5" [9225f684-4ad2-462b-a20b-13dd27aad56f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:29:02.308868   59899 system_pods.go:61] "etcd-embed-certs-094323" [5603d9a0-390a-4cf1-ad8f-a976016d96e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0925 11:29:02.308879   59899 system_pods.go:61] "kube-apiserver-embed-certs-094323" [eb928fb0-77a3-45c5-81ce-03ffcb288548] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0925 11:29:02.308889   59899 system_pods.go:61] "kube-controller-manager-embed-certs-094323" [8ee4e42e-367a-4be8-9787-c6eb13913d8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0925 11:29:02.308900   59899 system_pods.go:61] "kube-proxy-5k6vp" [b5a3fb6d-bc10-4cde-a1f1-8c57a1fa480b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:29:02.308911   59899 system_pods.go:61] "kube-scheduler-embed-certs-094323" [4e15edd2-b5f1-4441-b940-2055f20354d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0925 11:29:02.308926   59899 system_pods.go:61] "metrics-server-57f55c9bc5-xcns4" [32a1d71d-7f4d-466a-b745-d2fdf6a88570] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:29:02.308942   59899 system_pods.go:61] "storage-provisioner" [91ac60cc-4154-4e62-aa3e-6c492764d7f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:29:02.308955   59899 system_pods.go:74] duration metric: took 11.663759ms to wait for pod list to return data ...
	I0925 11:29:02.308969   59899 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:29:02.315279   59899 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:29:02.315316   59899 node_conditions.go:123] node cpu capacity is 2
	I0925 11:29:02.315329   59899 node_conditions.go:105] duration metric: took 6.35463ms to run NodePressure ...
	I0925 11:29:02.315351   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:29:02.598238   59899 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0925 11:29:02.603645   59899 kubeadm.go:787] kubelet initialised
	I0925 11:29:02.603673   59899 kubeadm.go:788] duration metric: took 5.409805ms waiting for restarted kubelet to initialise ...
	I0925 11:29:02.603682   59899 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:29:02.609652   59899 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:02.616919   59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.616945   59899 pod_ready.go:81] duration metric: took 7.267055ms waiting for pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:02.616957   59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.616966   59899 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:02.626927   59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "etcd-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.626952   59899 pod_ready.go:81] duration metric: took 9.977984ms waiting for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:02.626964   59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "etcd-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.626975   59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:02.635040   59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.635057   59899 pod_ready.go:81] duration metric: took 8.069751ms waiting for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:02.635065   59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.635071   59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:02.701570   59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.701594   59899 pod_ready.go:81] duration metric: took 66.51566ms waiting for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:02.701604   59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.701614   59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5k6vp" in "kube-system" namespace to be "Ready" ...
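The "Ready" being waited on above is the pod's PodReady status condition, and the waiter short-circuits (producing the pod_ready.go:66 errors) whenever the hosting node has not itself reported Ready. A sketch of the underlying condition check, assuming client-go types rather than minikube's actual pod_ready.go helpers:

package podwait

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the PodReady condition on a pod is True.
// minikube applies a check like this per system-critical pod, capped at
// 4m0s, skipping pods whose node is not yet Ready.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}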
	I0925 11:29:00.404282   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.407062   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:00.674496   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.676016   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:04.677117   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:05.005683   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:07.491820   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:04.513619   59899 pod_ready.go:92] pod "kube-proxy-5k6vp" in "kube-system" namespace has status "Ready":"True"
	I0925 11:29:04.513641   59899 pod_ready.go:81] duration metric: took 1.812019136s waiting for pod "kube-proxy-5k6vp" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:04.513650   59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:06.610704   59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:08.610973   59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:04.905976   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:07.404291   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:09.408011   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:07.173790   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:09.673547   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:09.492854   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:11.991906   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:11.110562   59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:13.112908   59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:11.905538   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.404450   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:12.173257   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.673817   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.492243   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:16.991655   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.610905   59899 pod_ready.go:92] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:29:14.610923   59899 pod_ready.go:81] duration metric: took 10.097268131s waiting for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:14.610932   59899 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:16.629749   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:16.412718   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:18.906798   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:17.173554   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:19.674607   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:18.992367   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:21.491588   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:19.130001   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:21.629643   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:21.403543   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:23.405654   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:22.173742   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:24.674422   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:23.992075   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:26.491409   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:28.492221   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:24.129530   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:26.629049   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:28.629817   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:25.909201   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:28.403475   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:27.174742   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:29.673522   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:30.990733   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:33.492080   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:31.128865   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:33.129900   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:30.405115   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:32.904179   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:31.674133   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:34.173962   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:35.990697   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:37.991964   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:35.629757   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:37.630073   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:34.905517   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:37.405590   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:36.175249   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:38.674512   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:40.490747   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:42.991730   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:40.129932   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:42.628523   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:39.904204   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:41.905925   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:44.406994   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:41.172242   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:43.173423   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:45.174163   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:44.992082   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:47.491243   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:44.629935   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:47.129139   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:46.904285   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.409716   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:47.174974   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.673662   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.993800   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:52.491813   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.130049   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:51.628211   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:53.629350   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:51.905344   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:53.905370   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:52.173811   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:54.673161   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:54.493022   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:56.993331   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:55.629518   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:57.629571   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:55.909272   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:58.403659   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:58.407567   57752 pod_ready.go:81] duration metric: took 4m0.000815308s waiting for pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:58.407592   57752 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:29:58.407601   57752 pod_ready.go:38] duration metric: took 4m6.831828709s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:29:58.407622   57752 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:29:58.407686   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:29:58.442532   57752 logs.go:284] 2 containers: [ae812308b161 50dd56b362e6]
	I0925 11:29:58.442627   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:29:58.466398   57752 logs.go:284] 2 containers: [f056fda5e129 771fdc2d4d72]
	I0925 11:29:58.466474   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:29:58.488629   57752 logs.go:284] 2 containers: [f4f7d2a397a7 19c28e83f034]
	I0925 11:29:58.488710   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:29:58.515985   57752 logs.go:284] 2 containers: [dd7534763296 0e6944ef9ef1]
	I0925 11:29:58.516069   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:29:58.551483   57752 logs.go:284] 2 containers: [ba51b7a85dfa c3c77640a284]
	I0925 11:29:58.551593   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:29:58.575447   57752 logs.go:284] 2 containers: [f5a2c4593b48 2b682a364274]
	I0925 11:29:58.575518   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:29:58.595332   57752 logs.go:284] 0 containers: []
	W0925 11:29:58.595354   57752 logs.go:286] No container was found matching "kindnet"
	I0925 11:29:58.595406   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:29:58.616993   57752 logs.go:284] 1 containers: [146977376d21]
	I0925 11:29:58.617053   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:29:58.641655   57752 logs.go:284] 2 containers: [a296191b186b e152c53b10e3]
	I0925 11:29:58.641682   57752 logs.go:123] Gathering logs for kube-controller-manager [f5a2c4593b48] ...
	I0925 11:29:58.641692   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a2c4593b48"
	I0925 11:29:58.697709   57752 logs.go:123] Gathering logs for storage-provisioner [e152c53b10e3] ...
	I0925 11:29:58.697746   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e152c53b10e3"
	I0925 11:29:58.720902   57752 logs.go:123] Gathering logs for container status ...
	I0925 11:29:58.720930   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:29:58.812571   57752 logs.go:123] Gathering logs for dmesg ...
	I0925 11:29:58.812609   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:29:58.833650   57752 logs.go:123] Gathering logs for kube-apiserver [50dd56b362e6] ...
	I0925 11:29:58.833678   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dd56b362e6"
	I0925 11:29:58.888959   57752 logs.go:123] Gathering logs for etcd [f056fda5e129] ...
	I0925 11:29:58.888999   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f056fda5e129"
	I0925 11:29:58.924906   57752 logs.go:123] Gathering logs for coredns [f4f7d2a397a7] ...
	I0925 11:29:58.924934   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f7d2a397a7"
	I0925 11:29:58.951722   57752 logs.go:123] Gathering logs for storage-provisioner [a296191b186b] ...
	I0925 11:29:58.951750   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a296191b186b"
	I0925 11:29:58.975890   57752 logs.go:123] Gathering logs for Docker ...
	I0925 11:29:58.975912   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:29:59.042048   57752 logs.go:123] Gathering logs for kube-apiserver [ae812308b161] ...
	I0925 11:29:59.042077   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae812308b161"
	I0925 11:29:59.090056   57752 logs.go:123] Gathering logs for coredns [19c28e83f034] ...
	I0925 11:29:59.090083   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c28e83f034"
	I0925 11:29:59.118231   57752 logs.go:123] Gathering logs for kube-proxy [ba51b7a85dfa] ...
	I0925 11:29:59.118257   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba51b7a85dfa"
	I0925 11:29:59.141561   57752 logs.go:123] Gathering logs for kube-proxy [c3c77640a284] ...
	I0925 11:29:59.141584   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c77640a284"
	I0925 11:29:59.168388   57752 logs.go:123] Gathering logs for kubernetes-dashboard [146977376d21] ...
	I0925 11:29:59.168420   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 146977376d21"
	I0925 11:29:59.202331   57752 logs.go:123] Gathering logs for kubelet ...
	I0925 11:29:59.202355   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:29:59.278282   57752 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:29:59.278317   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:29:59.431326   57752 logs.go:123] Gathering logs for kube-scheduler [dd7534763296] ...
	I0925 11:29:59.431356   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd7534763296"
	I0925 11:29:59.462487   57752 logs.go:123] Gathering logs for kube-controller-manager [2b682a364274] ...
	I0925 11:29:59.462516   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b682a364274"
	I0925 11:29:59.506895   57752 logs.go:123] Gathering logs for etcd [771fdc2d4d72] ...
	I0925 11:29:59.506927   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 771fdc2d4d72"
	I0925 11:29:59.551311   57752 logs.go:123] Gathering logs for kube-scheduler [0e6944ef9ef1] ...
	I0925 11:29:59.551351   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6944ef9ef1"
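Each "Gathering logs for X [id]" pair above follows one pattern: list candidate container IDs with a docker ps name filter (containers created for kubeadm components under the Docker runtime are named k8s_<component>_...), then tail 400 lines from each. A minimal sketch of the lookup step, shelling out to the docker CLI exactly as the logged commands do:

package logs

import (
	"os/exec"
	"strings"
)

// containerIDs returns IDs of all containers, running or exited, whose
// names match the k8s_<component> prefix. Mirrors the logged command:
// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}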
	I0925 11:29:56.674157   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:59.174193   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:59.490645   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:01.491108   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:03.491826   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:00.130429   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:02.630390   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:02.085037   57752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:30:02.106600   57752 api_server.go:72] duration metric: took 4m14.069395428s to wait for apiserver process to appear ...
	I0925 11:30:02.106631   57752 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:30:02.106709   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:02.131534   57752 logs.go:284] 2 containers: [ae812308b161 50dd56b362e6]
	I0925 11:30:02.131610   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:02.154915   57752 logs.go:284] 2 containers: [f056fda5e129 771fdc2d4d72]
	I0925 11:30:02.154979   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:02.178047   57752 logs.go:284] 2 containers: [f4f7d2a397a7 19c28e83f034]
	I0925 11:30:02.178108   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:02.202658   57752 logs.go:284] 2 containers: [dd7534763296 0e6944ef9ef1]
	I0925 11:30:02.202754   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:02.224819   57752 logs.go:284] 2 containers: [ba51b7a85dfa c3c77640a284]
	I0925 11:30:02.224908   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:02.246587   57752 logs.go:284] 2 containers: [f5a2c4593b48 2b682a364274]
	I0925 11:30:02.246650   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:02.267013   57752 logs.go:284] 0 containers: []
	W0925 11:30:02.267037   57752 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:02.267090   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:02.286403   57752 logs.go:284] 2 containers: [a296191b186b e152c53b10e3]
	I0925 11:30:02.286476   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:02.307111   57752 logs.go:284] 1 containers: [146977376d21]
	I0925 11:30:02.307141   57752 logs.go:123] Gathering logs for kube-controller-manager [f5a2c4593b48] ...
	I0925 11:30:02.307154   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a2c4593b48"
	I0925 11:30:02.347993   57752 logs.go:123] Gathering logs for storage-provisioner [a296191b186b] ...
	I0925 11:30:02.348022   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a296191b186b"
	I0925 11:30:02.370841   57752 logs.go:123] Gathering logs for kubernetes-dashboard [146977376d21] ...
	I0925 11:30:02.370875   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 146977376d21"
	I0925 11:30:02.396931   57752 logs.go:123] Gathering logs for etcd [771fdc2d4d72] ...
	I0925 11:30:02.396954   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 771fdc2d4d72"
	I0925 11:30:02.438996   57752 logs.go:123] Gathering logs for kube-scheduler [dd7534763296] ...
	I0925 11:30:02.439025   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd7534763296"
	I0925 11:30:02.464589   57752 logs.go:123] Gathering logs for kube-proxy [ba51b7a85dfa] ...
	I0925 11:30:02.464621   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba51b7a85dfa"
	I0925 11:30:02.492060   57752 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:02.492087   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:30:02.558928   57752 logs.go:123] Gathering logs for container status ...
	I0925 11:30:02.558959   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:02.654217   57752 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:02.654246   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:02.669423   57752 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:02.669453   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:02.802934   57752 logs.go:123] Gathering logs for coredns [f4f7d2a397a7] ...
	I0925 11:30:02.802959   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f7d2a397a7"
	I0925 11:30:02.835624   57752 logs.go:123] Gathering logs for coredns [19c28e83f034] ...
	I0925 11:30:02.835649   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c28e83f034"
	I0925 11:30:02.866826   57752 logs.go:123] Gathering logs for kube-scheduler [0e6944ef9ef1] ...
	I0925 11:30:02.866849   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6944ef9ef1"
	I0925 11:30:02.898744   57752 logs.go:123] Gathering logs for kube-apiserver [ae812308b161] ...
	I0925 11:30:02.898775   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae812308b161"
	I0925 11:30:02.934534   57752 logs.go:123] Gathering logs for kube-apiserver [50dd56b362e6] ...
	I0925 11:30:02.934567   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dd56b362e6"
	I0925 11:30:02.972310   57752 logs.go:123] Gathering logs for etcd [f056fda5e129] ...
	I0925 11:30:02.972337   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f056fda5e129"
	I0925 11:30:03.005474   57752 logs.go:123] Gathering logs for storage-provisioner [e152c53b10e3] ...
	I0925 11:30:03.005499   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e152c53b10e3"
	I0925 11:30:03.027346   57752 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:03.027368   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:03.099823   57752 logs.go:123] Gathering logs for kube-proxy [c3c77640a284] ...
	I0925 11:30:03.099857   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c77640a284"
	I0925 11:30:03.124682   57752 logs.go:123] Gathering logs for kube-controller-manager [2b682a364274] ...
	I0925 11:30:03.124717   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b682a364274"
	I0925 11:30:01.674624   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:04.179180   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:05.991507   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:08.492917   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:05.129924   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:07.630929   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:05.663871   57752 api_server.go:253] Checking apiserver healthz at https://192.168.72.162:8443/healthz ...
	I0925 11:30:05.669416   57752 api_server.go:279] https://192.168.72.162:8443/healthz returned 200:
	ok
	I0925 11:30:05.670783   57752 api_server.go:141] control plane version: v1.28.2
	I0925 11:30:05.670809   57752 api_server.go:131] duration metric: took 3.564170226s to wait for apiserver health ...
	I0925 11:30:05.670819   57752 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:30:05.670872   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:05.693324   57752 logs.go:284] 2 containers: [ae812308b161 50dd56b362e6]
	I0925 11:30:05.693399   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:05.717998   57752 logs.go:284] 2 containers: [f056fda5e129 771fdc2d4d72]
	I0925 11:30:05.718069   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:05.742708   57752 logs.go:284] 2 containers: [f4f7d2a397a7 19c28e83f034]
	I0925 11:30:05.742793   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:05.764298   57752 logs.go:284] 2 containers: [dd7534763296 0e6944ef9ef1]
	I0925 11:30:05.764374   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:05.785970   57752 logs.go:284] 2 containers: [ba51b7a85dfa c3c77640a284]
	I0925 11:30:05.786039   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:05.806950   57752 logs.go:284] 2 containers: [f5a2c4593b48 2b682a364274]
	I0925 11:30:05.807037   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:05.826462   57752 logs.go:284] 0 containers: []
	W0925 11:30:05.826487   57752 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:05.826540   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:05.845927   57752 logs.go:284] 1 containers: [146977376d21]
	I0925 11:30:05.845997   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:05.868573   57752 logs.go:284] 2 containers: [a296191b186b e152c53b10e3]
	I0925 11:30:05.868615   57752 logs.go:123] Gathering logs for kube-scheduler [0e6944ef9ef1] ...
	I0925 11:30:05.868629   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6944ef9ef1"
	I0925 11:30:05.909242   57752 logs.go:123] Gathering logs for kube-controller-manager [f5a2c4593b48] ...
	I0925 11:30:05.909270   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a2c4593b48"
	I0925 11:30:05.959647   57752 logs.go:123] Gathering logs for storage-provisioner [e152c53b10e3] ...
	I0925 11:30:05.959680   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e152c53b10e3"
	I0925 11:30:05.988448   57752 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:05.988480   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:06.067394   57752 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:06.067429   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:06.084943   57752 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:06.084971   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:06.238324   57752 logs.go:123] Gathering logs for etcd [f056fda5e129] ...
	I0925 11:30:06.238357   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f056fda5e129"
	I0925 11:30:06.273373   57752 logs.go:123] Gathering logs for coredns [f4f7d2a397a7] ...
	I0925 11:30:06.273403   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f7d2a397a7"
	I0925 11:30:06.303181   57752 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:06.303211   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:30:06.365354   57752 logs.go:123] Gathering logs for coredns [19c28e83f034] ...
	I0925 11:30:06.365398   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c28e83f034"
	I0925 11:30:06.391962   57752 logs.go:123] Gathering logs for kube-scheduler [dd7534763296] ...
	I0925 11:30:06.391989   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd7534763296"
	I0925 11:30:06.415389   57752 logs.go:123] Gathering logs for kube-proxy [c3c77640a284] ...
	I0925 11:30:06.415412   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c77640a284"
	I0925 11:30:06.441786   57752 logs.go:123] Gathering logs for kube-controller-manager [2b682a364274] ...
	I0925 11:30:06.441809   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b682a364274"
	I0925 11:30:06.479862   57752 logs.go:123] Gathering logs for kubernetes-dashboard [146977376d21] ...
	I0925 11:30:06.479892   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 146977376d21"
	I0925 11:30:06.507143   57752 logs.go:123] Gathering logs for kube-apiserver [50dd56b362e6] ...
	I0925 11:30:06.507186   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dd56b362e6"
	I0925 11:30:06.546486   57752 logs.go:123] Gathering logs for etcd [771fdc2d4d72] ...
	I0925 11:30:06.546514   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 771fdc2d4d72"
	I0925 11:30:06.591229   57752 logs.go:123] Gathering logs for kube-proxy [ba51b7a85dfa] ...
	I0925 11:30:06.591258   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba51b7a85dfa"
	I0925 11:30:06.616844   57752 logs.go:123] Gathering logs for container status ...
	I0925 11:30:06.616869   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:06.705576   57752 logs.go:123] Gathering logs for kube-apiserver [ae812308b161] ...
	I0925 11:30:06.705606   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae812308b161"
	I0925 11:30:06.742505   57752 logs.go:123] Gathering logs for storage-provisioner [a296191b186b] ...
	I0925 11:30:06.742533   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a296191b186b"
	I0925 11:30:09.274341   57752 system_pods.go:59] 8 kube-system pods found
	I0925 11:30:09.274368   57752 system_pods.go:61] "coredns-5dd5756b68-6786d" [e86c1a30-32f4-4518-9225-a6e735760871] Running
	I0925 11:30:09.274373   57752 system_pods.go:61] "etcd-no-preload-863905" [1af0b15d-6fff-41af-a97e-dc18bba9480f] Running
	I0925 11:30:09.274378   57752 system_pods.go:61] "kube-apiserver-no-preload-863905" [f7b1ffbf-13ef-4e05-9e71-87d03330cbf8] Running
	I0925 11:30:09.274383   57752 system_pods.go:61] "kube-controller-manager-no-preload-863905" [0fdd6d61-d653-4555-8333-e8275502c7b2] Running
	I0925 11:30:09.274386   57752 system_pods.go:61] "kube-proxy-g9dff" [db292442-0bc8-4d3f-b34f-c0142915ca47] Running
	I0925 11:30:09.274390   57752 system_pods.go:61] "kube-scheduler-no-preload-863905" [e832de51-a864-49ac-9919-9a02b16a029b] Running
	I0925 11:30:09.274397   57752 system_pods.go:61] "metrics-server-57f55c9bc5-p2tvr" [fc088a2c-3867-410d-b513-29e872f8156e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:09.274402   57752 system_pods.go:61] "storage-provisioner" [13df307c-c76e-4abd-bd03-165b04163d3a] Running
	I0925 11:30:09.274408   57752 system_pods.go:74] duration metric: took 3.603584218s to wait for pod list to return data ...
	I0925 11:30:09.274414   57752 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:30:09.276929   57752 default_sa.go:45] found service account: "default"
	I0925 11:30:09.276948   57752 default_sa.go:55] duration metric: took 2.5282ms for default service account to be created ...
	I0925 11:30:09.276954   57752 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:30:09.282656   57752 system_pods.go:86] 8 kube-system pods found
	I0925 11:30:09.282684   57752 system_pods.go:89] "coredns-5dd5756b68-6786d" [e86c1a30-32f4-4518-9225-a6e735760871] Running
	I0925 11:30:09.282690   57752 system_pods.go:89] "etcd-no-preload-863905" [1af0b15d-6fff-41af-a97e-dc18bba9480f] Running
	I0925 11:30:09.282694   57752 system_pods.go:89] "kube-apiserver-no-preload-863905" [f7b1ffbf-13ef-4e05-9e71-87d03330cbf8] Running
	I0925 11:30:09.282699   57752 system_pods.go:89] "kube-controller-manager-no-preload-863905" [0fdd6d61-d653-4555-8333-e8275502c7b2] Running
	I0925 11:30:09.282702   57752 system_pods.go:89] "kube-proxy-g9dff" [db292442-0bc8-4d3f-b34f-c0142915ca47] Running
	I0925 11:30:09.282706   57752 system_pods.go:89] "kube-scheduler-no-preload-863905" [e832de51-a864-49ac-9919-9a02b16a029b] Running
	I0925 11:30:09.282712   57752 system_pods.go:89] "metrics-server-57f55c9bc5-p2tvr" [fc088a2c-3867-410d-b513-29e872f8156e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:09.282721   57752 system_pods.go:89] "storage-provisioner" [13df307c-c76e-4abd-bd03-165b04163d3a] Running
	I0925 11:30:09.282728   57752 system_pods.go:126] duration metric: took 5.769715ms to wait for k8s-apps to be running ...
	I0925 11:30:09.282734   57752 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 11:30:09.282774   57752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:09.296447   57752 system_svc.go:56] duration metric: took 13.70254ms WaitForService to wait for kubelet.
	I0925 11:30:09.296472   57752 kubeadm.go:581] duration metric: took 4m21.259281902s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 11:30:09.296496   57752 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:30:09.300312   57752 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:30:09.300337   57752 node_conditions.go:123] node cpu capacity is 2
	I0925 11:30:09.300350   57752 node_conditions.go:105] duration metric: took 3.848191ms to run NodePressure ...
	I0925 11:30:09.300362   57752 start.go:228] waiting for startup goroutines ...
	I0925 11:30:09.300371   57752 start.go:233] waiting for cluster config update ...
	I0925 11:30:09.300384   57752 start.go:242] writing updated cluster config ...
	I0925 11:30:09.300719   57752 ssh_runner.go:195] Run: rm -f paused
	I0925 11:30:09.350285   57752 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0925 11:30:09.353257   57752 out.go:177] * Done! kubectl is now configured to use "no-preload-863905" cluster and "default" namespace by default
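At this point the parallel no-preload-863905 profile (pid 57752) has completed successfully, and minikube has pointed the current kubeconfig context at it, so for example:

	kubectl config use-context no-preload-863905
	kubectl get pods -A

would inspect that cluster. The streams continuing below belong to the profiles still polling their metrics-server pods; old-k8s-version (57426), the profile under test here, is about to hit its 4m deadline.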
	I0925 11:30:06.676262   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:09.174330   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:10.992813   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:13.490354   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:09.636520   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:12.129471   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:11.175516   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:13.673816   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:14.366919   57426 pod_ready.go:81] duration metric: took 4m0.00014225s waiting for pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace to be "Ready" ...
	E0925 11:30:14.366953   57426 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:30:14.366991   57426 pod_ready.go:38] duration metric: took 4m1.195639658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:30:14.367015   57426 kubeadm.go:640] restartCluster took 5m2.405916758s
	W0925 11:30:14.367083   57426 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0925 11:30:14.367112   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
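Each pod_ready.go:102 line above is one iteration of a readiness poll: the pod's Ready condition is re-checked on a short interval until it flips to True or the 4m0s budget expires, which is exactly the "context deadline exceeded" path at 11:30:14 that forces the kubeadm reset. A hedged client-go sketch of that wait; waitPodReady, the 2s interval, and the error wrapping are illustrative rather than minikube's exact code, and the kubeconfig path and pod name are copied from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady re-reads the pod until its Ready condition is True or the
	// context deadline expires, mirroring the repeated "Ready":"False" polls.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		tick := time.NewTicker(2 * time.Second)
		defer tick.Stop()
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("waitPodCondition: %w", ctx.Err()) // -> "context deadline exceeded"
			case <-tick.C:
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17297-6032/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		fmt.Println(waitPodReady(ctx, cs, "kube-system", "metrics-server-74d5856cc6-mknft"))
	}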
	I0925 11:30:15.494599   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:17.993167   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:14.130508   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:16.132437   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:18.631163   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:17.424908   57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (3.057768249s)
	I0925 11:30:17.424975   57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:17.439514   57426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 11:30:17.449686   57426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 11:30:17.460096   57426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
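The config check at kubeadm.go:152 above is an exit-code test: ls -la on the four kubeconfig files exits 0 only when all of them exist, and status 2 (every file missing, as expected right after the kubeadm reset at 11:30:14) means there is no stale config to clean up. A small sketch of distinguishing that outcome with os/exec; staleConfigPresent is an illustrative name and the paths are copied from the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// staleConfigPresent mirrors the check above: `ls` exits 0 only if every
	// listed file exists, so a non-zero status means the configs are absent
	// and stale-config cleanup can be skipped.
	func staleConfigPresent() (bool, error) {
		cmd := exec.Command("sudo", "ls", "-la",
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return false, nil // e.g. "Process exited with status 2" in the log
		}
		return err == nil, err
	}

	func main() {
		fmt.Println(staleConfigPresent())
	}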
	I0925 11:30:17.460147   57426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0925 11:30:17.622252   57426 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0925 11:30:17.662261   57426 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I0925 11:30:17.759764   57426 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 11:30:20.493076   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:22.995449   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:21.130370   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:23.137540   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:24.792048   57927 pod_ready.go:81] duration metric: took 4m0.000079144s waiting for pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace to be "Ready" ...
	E0925 11:30:24.792097   57927 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:30:24.792110   57927 pod_ready.go:38] duration metric: took 4m9.506946432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:30:24.792141   57927 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:30:24.792215   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:24.824238   57927 logs.go:284] 2 containers: [8b9c731d3b7e d7bd5b496cbd]
	I0925 11:30:24.824320   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:24.843686   57927 logs.go:284] 2 containers: [398bd2a5d8a1 5885667a7efa]
	I0925 11:30:24.843764   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:24.868292   57927 logs.go:284] 2 containers: [f04ac298d08a 7603adb1cbbb]
	I0925 11:30:24.868377   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:24.892540   57927 logs.go:284] 2 containers: [3815d034e8cc fb845c120fcf]
	I0925 11:30:24.892617   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:24.919019   57927 logs.go:284] 2 containers: [3061d1aa366b 30075b5efc6f]
	I0925 11:30:24.919091   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:24.946855   57927 logs.go:284] 2 containers: [b75d214e650c 1e96b0e25a6d]
	I0925 11:30:24.946930   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:24.989142   57927 logs.go:284] 0 containers: []
	W0925 11:30:24.989168   57927 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:24.989220   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:25.011261   57927 logs.go:284] 2 containers: [0f7378f7cd7f b9d2c22b02cb]
	I0925 11:30:25.011345   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:25.030950   57927 logs.go:284] 1 containers: [f3cb7eacbd5f]
	I0925 11:30:25.030977   57927 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:25.030989   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:25.120210   57927 logs.go:123] Gathering logs for etcd [398bd2a5d8a1] ...
	I0925 11:30:25.120239   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 398bd2a5d8a1"
	I0925 11:30:25.152215   57927 logs.go:123] Gathering logs for etcd [5885667a7efa] ...
	I0925 11:30:25.152243   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5885667a7efa"
	I0925 11:30:25.194959   57927 logs.go:123] Gathering logs for kube-scheduler [3815d034e8cc] ...
	I0925 11:30:25.194997   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3815d034e8cc"
	I0925 11:30:25.229067   57927 logs.go:123] Gathering logs for kube-proxy [3061d1aa366b] ...
	I0925 11:30:25.229094   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3061d1aa366b"
	I0925 11:30:25.256359   57927 logs.go:123] Gathering logs for kubernetes-dashboard [f3cb7eacbd5f] ...
	I0925 11:30:25.256386   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3cb7eacbd5f"
	I0925 11:30:25.280428   57927 logs.go:123] Gathering logs for kube-apiserver [d7bd5b496cbd] ...
	I0925 11:30:25.280459   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bd5b496cbd"
	I0925 11:30:25.330876   57927 logs.go:123] Gathering logs for coredns [f04ac298d08a] ...
	I0925 11:30:25.330902   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04ac298d08a"
	I0925 11:30:25.353121   57927 logs.go:123] Gathering logs for coredns [7603adb1cbbb] ...
	I0925 11:30:25.353148   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7603adb1cbbb"
	I0925 11:30:25.375127   57927 logs.go:123] Gathering logs for storage-provisioner [0f7378f7cd7f] ...
	I0925 11:30:25.375154   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f7378f7cd7f"
	I0925 11:30:25.402664   57927 logs.go:123] Gathering logs for storage-provisioner [b9d2c22b02cb] ...
	I0925 11:30:25.402690   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9d2c22b02cb"
	I0925 11:30:25.428214   57927 logs.go:123] Gathering logs for container status ...
	I0925 11:30:25.428238   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:25.509982   57927 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:25.510015   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:25.525584   57927 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:25.525623   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:25.696377   57927 logs.go:123] Gathering logs for kube-apiserver [8b9c731d3b7e] ...
	I0925 11:30:25.696402   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9c731d3b7e"
	I0925 11:30:25.734242   57927 logs.go:123] Gathering logs for kube-scheduler [fb845c120fcf] ...
	I0925 11:30:25.734271   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb845c120fcf"
	I0925 11:30:25.763410   57927 logs.go:123] Gathering logs for kube-proxy [30075b5efc6f] ...
	I0925 11:30:25.763436   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30075b5efc6f"
	I0925 11:30:25.797529   57927 logs.go:123] Gathering logs for kube-controller-manager [b75d214e650c] ...
	I0925 11:30:25.797556   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75d214e650c"
	I0925 11:30:25.843899   57927 logs.go:123] Gathering logs for kube-controller-manager [1e96b0e25a6d] ...
	I0925 11:30:25.843927   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e96b0e25a6d"
	I0925 11:30:25.896478   57927 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:25.896507   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
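The container half of each log-gathering pass above always has the same two steps: resolve container IDs with a docker ps -a name filter of the form k8s_<component>, then tail the last 400 lines of every match (kubelet, dmesg, "describe nodes" and the Docker journal are collected by the separate commands visible in the log). A compact sketch of that loop; gatherLogs is an illustrative name and the component list is taken from the filters above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gatherLogs resolves container IDs for each k8s_<component> name filter,
	// then tails the last 400 log lines of every match, as in logs.go above.
	func gatherLogs(components []string) {
		for _, c := range components {
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				continue
			}
			for _, id := range strings.Fields(string(out)) {
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
			}
		}
	}

	func main() {
		gatherLogs([]string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"storage-provisioner", "kubernetes-dashboard"})
	}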
	I0925 11:30:28.465765   57927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:30:28.480996   57927 api_server.go:72] duration metric: took 4m15.769590927s to wait for apiserver process to appear ...
	I0925 11:30:28.481023   57927 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:30:28.481101   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:25.631323   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:28.129055   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:30.749642   57426 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0925 11:30:30.749742   57426 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 11:30:30.749858   57426 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 11:30:30.749944   57426 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 11:30:30.750021   57426 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 11:30:30.750109   57426 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 11:30:30.750191   57426 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 11:30:30.750247   57426 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0925 11:30:30.750371   57426 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 11:30:30.751913   57426 out.go:204]   - Generating certificates and keys ...
	I0925 11:30:30.752003   57426 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 11:30:30.752119   57426 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 11:30:30.752232   57426 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0925 11:30:30.752318   57426 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0925 11:30:30.752414   57426 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0925 11:30:30.752468   57426 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0925 11:30:30.752570   57426 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0925 11:30:30.752681   57426 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0925 11:30:30.752781   57426 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0925 11:30:30.752890   57426 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0925 11:30:30.752940   57426 kubeadm.go:322] [certs] Using the existing "sa" key
	I0925 11:30:30.753020   57426 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 11:30:30.753090   57426 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 11:30:30.753154   57426 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 11:30:30.753251   57426 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 11:30:30.753324   57426 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 11:30:30.753398   57426 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 11:30:30.755107   57426 out.go:204]   - Booting up control plane ...
	I0925 11:30:30.755208   57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 11:30:30.755334   57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 11:30:30.755426   57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 11:30:30.755500   57426 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 11:30:30.755652   57426 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 11:30:30.755754   57426 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.505077 seconds
	I0925 11:30:30.755912   57426 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 11:30:30.756083   57426 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 11:30:30.756182   57426 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 11:30:30.756384   57426 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-694015 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0925 11:30:30.756471   57426 kubeadm.go:322] [bootstrap-token] Using token: snq27o.n0f9uw50v17gbayd
	I0925 11:30:28.509506   57927 logs.go:284] 2 containers: [8b9c731d3b7e d7bd5b496cbd]
	I0925 11:30:28.509575   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:28.532621   57927 logs.go:284] 2 containers: [398bd2a5d8a1 5885667a7efa]
	I0925 11:30:28.532723   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:28.554799   57927 logs.go:284] 2 containers: [f04ac298d08a 7603adb1cbbb]
	I0925 11:30:28.554878   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:28.574977   57927 logs.go:284] 2 containers: [3815d034e8cc fb845c120fcf]
	I0925 11:30:28.575048   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:28.596014   57927 logs.go:284] 2 containers: [3061d1aa366b 30075b5efc6f]
	I0925 11:30:28.596094   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:28.616627   57927 logs.go:284] 2 containers: [b75d214e650c 1e96b0e25a6d]
	I0925 11:30:28.616712   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:28.636762   57927 logs.go:284] 0 containers: []
	W0925 11:30:28.636782   57927 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:28.636838   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:28.659028   57927 logs.go:284] 2 containers: [0f7378f7cd7f b9d2c22b02cb]
	I0925 11:30:28.659094   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:28.680689   57927 logs.go:284] 1 containers: [f3cb7eacbd5f]
	I0925 11:30:28.680722   57927 logs.go:123] Gathering logs for etcd [398bd2a5d8a1] ...
	I0925 11:30:28.680736   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 398bd2a5d8a1"
	I0925 11:30:28.714051   57927 logs.go:123] Gathering logs for etcd [5885667a7efa] ...
	I0925 11:30:28.714078   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5885667a7efa"
	I0925 11:30:28.762170   57927 logs.go:123] Gathering logs for kubernetes-dashboard [f3cb7eacbd5f] ...
	I0925 11:30:28.762204   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3cb7eacbd5f"
	I0925 11:30:28.788343   57927 logs.go:123] Gathering logs for container status ...
	I0925 11:30:28.788371   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:28.869517   57927 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:28.869548   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:29.002897   57927 logs.go:123] Gathering logs for coredns [7603adb1cbbb] ...
	I0925 11:30:29.002920   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7603adb1cbbb"
	I0925 11:30:29.032416   57927 logs.go:123] Gathering logs for kube-scheduler [3815d034e8cc] ...
	I0925 11:30:29.032444   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3815d034e8cc"
	I0925 11:30:29.063893   57927 logs.go:123] Gathering logs for kube-proxy [3061d1aa366b] ...
	I0925 11:30:29.063921   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3061d1aa366b"
	I0925 11:30:29.089890   57927 logs.go:123] Gathering logs for kube-controller-manager [b75d214e650c] ...
	I0925 11:30:29.089916   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75d214e650c"
	I0925 11:30:29.132797   57927 logs.go:123] Gathering logs for storage-provisioner [b9d2c22b02cb] ...
	I0925 11:30:29.132827   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9d2c22b02cb"
	I0925 11:30:29.155350   57927 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:29.155371   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:30:29.213418   57927 logs.go:123] Gathering logs for kube-apiserver [d7bd5b496cbd] ...
	I0925 11:30:29.213447   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bd5b496cbd"
	I0925 11:30:29.254863   57927 logs.go:123] Gathering logs for coredns [f04ac298d08a] ...
	I0925 11:30:29.254891   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04ac298d08a"
	I0925 11:30:29.277677   57927 logs.go:123] Gathering logs for kube-scheduler [fb845c120fcf] ...
	I0925 11:30:29.277709   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb845c120fcf"
	I0925 11:30:29.308393   57927 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:29.308422   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:29.330968   57927 logs.go:123] Gathering logs for kube-apiserver [8b9c731d3b7e] ...
	I0925 11:30:29.330989   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9c731d3b7e"
	I0925 11:30:29.374515   57927 logs.go:123] Gathering logs for kube-proxy [30075b5efc6f] ...
	I0925 11:30:29.374542   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30075b5efc6f"
	I0925 11:30:29.399946   57927 logs.go:123] Gathering logs for kube-controller-manager [1e96b0e25a6d] ...
	I0925 11:30:29.399975   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e96b0e25a6d"
	I0925 11:30:29.445837   57927 logs.go:123] Gathering logs for storage-provisioner [0f7378f7cd7f] ...
	I0925 11:30:29.445870   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f7378f7cd7f"
	I0925 11:30:29.468320   57927 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:29.468346   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:32.042767   57927 api_server.go:253] Checking apiserver healthz at https://192.168.61.208:8444/healthz ...
	I0925 11:30:32.048546   57927 api_server.go:279] https://192.168.61.208:8444/healthz returned 200:
	ok
	I0925 11:30:32.052014   57927 api_server.go:141] control plane version: v1.28.2
	I0925 11:30:32.052036   57927 api_server.go:131] duration metric: took 3.571006059s to wait for apiserver health ...
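The healthz wait above (api_server.go:253/279) is a plain HTTPS probe: GET /healthz on the apiserver, success meaning status 200 with the literal body "ok". A minimal sketch of one probe; skipping TLS verification is an illustration-only shortcut, since the real client authenticates against the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// healthz performs one check against the apiserver health endpoint and
	// reports whether it answered 200 "ok", as logged at api_server.go:279.
	func healthz(url string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: the real code trusts the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == 200 && string(body) == "ok", nil
	}

	func main() {
		fmt.Println(healthz("https://192.168.61.208:8444/healthz"))
	}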
	I0925 11:30:32.052046   57927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:30:32.052108   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:32.083762   57927 logs.go:284] 2 containers: [8b9c731d3b7e d7bd5b496cbd]
	I0925 11:30:32.083848   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:32.106317   57927 logs.go:284] 2 containers: [398bd2a5d8a1 5885667a7efa]
	I0925 11:30:32.106392   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:32.128245   57927 logs.go:284] 2 containers: [f04ac298d08a 7603adb1cbbb]
	I0925 11:30:32.128333   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:32.148973   57927 logs.go:284] 2 containers: [3815d034e8cc fb845c120fcf]
	I0925 11:30:32.149052   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:32.174028   57927 logs.go:284] 2 containers: [3061d1aa366b 30075b5efc6f]
	I0925 11:30:32.174103   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:32.196115   57927 logs.go:284] 2 containers: [b75d214e650c 1e96b0e25a6d]
	I0925 11:30:32.196181   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:32.216678   57927 logs.go:284] 0 containers: []
	W0925 11:30:32.216702   57927 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:32.216757   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:32.237388   57927 logs.go:284] 1 containers: [f3cb7eacbd5f]
	I0925 11:30:32.237473   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:32.257088   57927 logs.go:284] 2 containers: [0f7378f7cd7f b9d2c22b02cb]
	I0925 11:30:32.257112   57927 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:32.257122   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:32.327894   57927 logs.go:123] Gathering logs for kube-apiserver [8b9c731d3b7e] ...
	I0925 11:30:32.327929   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9c731d3b7e"
	I0925 11:30:32.365128   57927 logs.go:123] Gathering logs for kube-scheduler [3815d034e8cc] ...
	I0925 11:30:32.365156   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3815d034e8cc"
	I0925 11:30:32.394664   57927 logs.go:123] Gathering logs for kubernetes-dashboard [f3cb7eacbd5f] ...
	I0925 11:30:32.394703   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3cb7eacbd5f"
	I0925 11:30:32.450709   57927 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:32.450737   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:30:32.512407   57927 logs.go:123] Gathering logs for container status ...
	I0925 11:30:32.512442   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:32.602958   57927 logs.go:123] Gathering logs for kube-apiserver [d7bd5b496cbd] ...
	I0925 11:30:32.602985   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bd5b496cbd"
	I0925 11:30:32.646449   57927 logs.go:123] Gathering logs for etcd [5885667a7efa] ...
	I0925 11:30:32.646478   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5885667a7efa"
	I0925 11:30:32.693817   57927 logs.go:123] Gathering logs for coredns [7603adb1cbbb] ...
	I0925 11:30:32.693843   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7603adb1cbbb"
	I0925 11:30:32.728336   57927 logs.go:123] Gathering logs for kube-proxy [3061d1aa366b] ...
	I0925 11:30:32.728364   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3061d1aa366b"
	I0925 11:30:32.754018   57927 logs.go:123] Gathering logs for kube-controller-manager [1e96b0e25a6d] ...
	I0925 11:30:32.754053   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e96b0e25a6d"
	I0925 11:30:32.791438   57927 logs.go:123] Gathering logs for storage-provisioner [0f7378f7cd7f] ...
	I0925 11:30:32.791473   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f7378f7cd7f"
	I0925 11:30:32.813473   57927 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:32.813501   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:32.827795   57927 logs.go:123] Gathering logs for etcd [398bd2a5d8a1] ...
	I0925 11:30:32.827824   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 398bd2a5d8a1"
	I0925 11:30:32.862910   57927 logs.go:123] Gathering logs for kube-scheduler [fb845c120fcf] ...
	I0925 11:30:32.862934   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb845c120fcf"
	I0925 11:30:32.899610   57927 logs.go:123] Gathering logs for kube-controller-manager [b75d214e650c] ...
	I0925 11:30:32.899642   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75d214e650c"
	I0925 11:30:32.941021   57927 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:32.941056   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:33.072749   57927 logs.go:123] Gathering logs for coredns [f04ac298d08a] ...
	I0925 11:30:33.072786   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04ac298d08a"
	I0925 11:30:33.105984   57927 logs.go:123] Gathering logs for kube-proxy [30075b5efc6f] ...
	I0925 11:30:33.106016   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30075b5efc6f"
	I0925 11:30:33.132338   57927 logs.go:123] Gathering logs for storage-provisioner [b9d2c22b02cb] ...
	I0925 11:30:33.132366   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9d2c22b02cb"
	I0925 11:30:30.629720   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:33.133383   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:30.758173   57426 out.go:204]   - Configuring RBAC rules ...
	I0925 11:30:30.758310   57426 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 11:30:30.758487   57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 11:30:30.758649   57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 11:30:30.758810   57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 11:30:30.758962   57426 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 11:30:30.759033   57426 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 11:30:30.759112   57426 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 11:30:30.759121   57426 kubeadm.go:322] 
	I0925 11:30:30.759191   57426 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 11:30:30.759205   57426 kubeadm.go:322] 
	I0925 11:30:30.759275   57426 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 11:30:30.759285   57426 kubeadm.go:322] 
	I0925 11:30:30.759329   57426 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 11:30:30.759379   57426 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 11:30:30.759421   57426 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 11:30:30.759429   57426 kubeadm.go:322] 
	I0925 11:30:30.759483   57426 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 11:30:30.759595   57426 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 11:30:30.759689   57426 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 11:30:30.759705   57426 kubeadm.go:322] 
	I0925 11:30:30.759821   57426 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0925 11:30:30.759962   57426 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 11:30:30.759977   57426 kubeadm.go:322] 
	I0925 11:30:30.760084   57426 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token snq27o.n0f9uw50v17gbayd \
	I0925 11:30:30.760216   57426 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 \
	I0925 11:30:30.760255   57426 kubeadm.go:322]     --control-plane 	  
	I0925 11:30:30.760264   57426 kubeadm.go:322] 
	I0925 11:30:30.760361   57426 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 11:30:30.760370   57426 kubeadm.go:322] 
	I0925 11:30:30.760469   57426 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token snq27o.n0f9uw50v17gbayd \
	I0925 11:30:30.760617   57426 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 
	I0925 11:30:30.760630   57426 cni.go:84] Creating CNI manager for ""
	I0925 11:30:30.760655   57426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 11:30:30.760693   57426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 11:30:30.760827   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:30.760899   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=old-k8s-version-694015 minikube.k8s.io/updated_at=2023_09_25T11_30_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:30.820984   57426 ops.go:34] apiserver oom_adj: -16
	I0925 11:30:31.034555   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:31.165894   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:31.768765   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:32.269393   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:32.768687   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:33.269126   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:33.768794   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:34.269149   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:34.769469   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:35.268685   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:35.664427   57927 system_pods.go:59] 8 kube-system pods found
	I0925 11:30:35.664451   57927 system_pods.go:61] "coredns-5dd5756b68-lp744" [67024c7b-a800-4c05-80f8-ad56b637d721] Running
	I0925 11:30:35.664456   57927 system_pods.go:61] "etcd-default-k8s-diff-port-319133" [bc48a820-15fc-46c3-be99-4842fec268b5] Running
	I0925 11:30:35.664461   57927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-319133" [04c9e550-fac9-4b14-a53f-f49a8d28f3aa] Running
	I0925 11:30:35.664466   57927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-319133" [84d26a48-a3cb-480e-818a-04e47c47a04a] Running
	I0925 11:30:35.664473   57927 system_pods.go:61] "kube-proxy-p4dnh" [8d162f05-34ef-431b-ac18-fc0ea1f48a5a] Running
	I0925 11:30:35.664479   57927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-319133" [d66d0709-f0f0-482b-88fc-cbf209c895fd] Running
	I0925 11:30:35.664489   57927 system_pods.go:61] "metrics-server-57f55c9bc5-wcdlv" [66045763-8356-4769-930f-a82fc160d7fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:35.664507   57927 system_pods.go:61] "storage-provisioner" [eaa8bad6-4a31-4429-98ff-099273d7184f] Running
	I0925 11:30:35.664518   57927 system_pods.go:74] duration metric: took 3.612465435s to wait for pod list to return data ...
	I0925 11:30:35.664526   57927 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:30:35.669238   57927 default_sa.go:45] found service account: "default"
	I0925 11:30:35.669258   57927 default_sa.go:55] duration metric: took 4.728219ms for default service account to be created ...
	I0925 11:30:35.669266   57927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:30:35.677224   57927 system_pods.go:86] 8 kube-system pods found
	I0925 11:30:35.677248   57927 system_pods.go:89] "coredns-5dd5756b68-lp744" [67024c7b-a800-4c05-80f8-ad56b637d721] Running
	I0925 11:30:35.677254   57927 system_pods.go:89] "etcd-default-k8s-diff-port-319133" [bc48a820-15fc-46c3-be99-4842fec268b5] Running
	I0925 11:30:35.677260   57927 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-319133" [04c9e550-fac9-4b14-a53f-f49a8d28f3aa] Running
	I0925 11:30:35.677265   57927 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-319133" [84d26a48-a3cb-480e-818a-04e47c47a04a] Running
	I0925 11:30:35.677269   57927 system_pods.go:89] "kube-proxy-p4dnh" [8d162f05-34ef-431b-ac18-fc0ea1f48a5a] Running
	I0925 11:30:35.677273   57927 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-319133" [d66d0709-f0f0-482b-88fc-cbf209c895fd] Running
	I0925 11:30:35.677279   57927 system_pods.go:89] "metrics-server-57f55c9bc5-wcdlv" [66045763-8356-4769-930f-a82fc160d7fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:35.677285   57927 system_pods.go:89] "storage-provisioner" [eaa8bad6-4a31-4429-98ff-099273d7184f] Running
	I0925 11:30:35.677291   57927 system_pods.go:126] duration metric: took 8.021227ms to wait for k8s-apps to be running ...
	I0925 11:30:35.677301   57927 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 11:30:35.677340   57927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:35.696637   57927 system_svc.go:56] duration metric: took 19.327902ms WaitForService to wait for kubelet.
	I0925 11:30:35.696659   57927 kubeadm.go:581] duration metric: took 4m22.985262397s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 11:30:35.696712   57927 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:30:35.701675   57927 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:30:35.701709   57927 node_conditions.go:123] node cpu capacity is 2
	I0925 11:30:35.701719   57927 node_conditions.go:105] duration metric: took 4.999654ms to run NodePressure ...
	I0925 11:30:35.701730   57927 start.go:228] waiting for startup goroutines ...
	I0925 11:30:35.701737   57927 start.go:233] waiting for cluster config update ...
	I0925 11:30:35.701749   57927 start.go:242] writing updated cluster config ...
	I0925 11:30:35.702076   57927 ssh_runner.go:195] Run: rm -f paused
	I0925 11:30:35.751111   57927 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0925 11:30:35.753033   57927 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-319133" cluster and "default" namespace by default
	I0925 11:30:35.134183   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:37.629084   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:35.769384   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:36.269510   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:36.768848   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:37.268799   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:37.769259   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:38.269444   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:38.769081   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:39.269471   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:39.768795   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:40.269215   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:39.631655   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:42.128083   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:40.768992   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:41.269161   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:41.768782   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:42.269438   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:42.769149   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:43.268490   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:43.768911   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:44.269363   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:44.769428   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:45.268548   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:45.769489   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:46.046613   57426 kubeadm.go:1081] duration metric: took 15.285826285s to wait for elevateKubeSystemPrivileges.
	I0925 11:30:46.046655   57426 kubeadm.go:406] StartCluster complete in 5m34.119546847s
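The burst of "kubectl get sa default" runs between 11:30:31 and 11:30:46 is a retry loop: immediately after kubeadm init the "default" ServiceAccount does not exist until the controller-manager creates it, so the get is re-issued on a short cadence until it succeeds, 15.28s here per the metric the log attributes to elevateKubeSystemPrivileges. A sketch of that wait; the 500ms interval and helper name are illustrative, while the kubectl and kubeconfig paths are copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitDefaultServiceAccount re-runs `kubectl get sa default` until it
	// succeeds, i.e. until controller-manager has created the account.
	func waitDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig)
			if cmd.Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		err := waitDefaultServiceAccount(
			"/var/lib/minikube/binaries/v1.16.0/kubectl",
			"/var/lib/minikube/kubeconfig", time.Minute)
		fmt.Println(err)
	}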
	I0925 11:30:46.046676   57426 settings.go:142] acquiring lock: {Name:mk372f3d0f6e5777ebfc48341e146821f27f636c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:30:46.046764   57426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:30:46.048206   57426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:30:46.048432   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 11:30:46.048574   57426 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 11:30:46.048644   57426 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-694015"
	I0925 11:30:46.048653   57426 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-694015"
	I0925 11:30:46.048678   57426 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-694015"
	I0925 11:30:46.048687   57426 addons.go:69] Setting dashboard=true in profile "old-k8s-version-694015"
	W0925 11:30:46.048690   57426 addons.go:240] addon storage-provisioner should already be in state true
	I0925 11:30:46.048698   57426 addons.go:231] Setting addon dashboard=true in "old-k8s-version-694015"
	W0925 11:30:46.048709   57426 addons.go:240] addon dashboard should already be in state true
	I0925 11:30:46.048720   57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 11:30:46.048735   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.048744   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.048818   57426 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-694015"
	I0925 11:30:46.048847   57426 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-694015"
	W0925 11:30:46.048855   57426 addons.go:240] addon metrics-server should already be in state true
	I0925 11:30:46.048680   57426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-694015"
	I0925 11:30:46.048796   57426 cache.go:107] acquiring lock: {Name:mk67fca357e44d730577a3f111223198f60ef976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:30:46.048888   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.048935   57426 cache.go:115] /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0925 11:30:46.048944   57426 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 153.391µs
	I0925 11:30:46.048955   57426 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0925 11:30:46.048963   57426 cache.go:87] Successfully saved all images to host disk.
	I0925 11:30:46.049135   57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 11:30:46.049144   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049162   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049168   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049183   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049247   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049260   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049320   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049333   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049505   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049555   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.072180   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
	I0925 11:30:46.072238   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0925 11:30:46.072269   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I0925 11:30:46.072356   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0925 11:30:46.072357   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I0925 11:30:46.072696   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.072776   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.072860   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.073344   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.073364   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.073496   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.073509   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.073509   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.073756   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.073762   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.073964   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.074195   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.074210   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.074253   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.074286   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.074439   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.074467   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.074610   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.074656   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.074686   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.074715   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.074930   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.075069   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.075101   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.075234   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.075269   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.075582   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.075811   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.077659   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.077697   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.094611   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
	I0925 11:30:46.097022   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
	I0925 11:30:46.097145   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.097460   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.097748   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.097767   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.098172   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.098314   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.098333   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.098564   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.098618   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.099229   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.101256   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.103863   57426 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0925 11:30:46.102124   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.102436   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0925 11:30:46.106576   57426 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0925 11:30:46.105560   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.109500   57426 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 11:30:46.108220   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0925 11:30:46.108845   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.110913   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.110969   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0925 11:30:46.110985   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.110999   57426 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:30:46.111011   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 11:30:46.111024   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.112450   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.112637   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.112839   57426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:30:46.112862   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.115509   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.115949   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.115983   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.116123   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.116214   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.116253   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.116342   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.116466   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.116484   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.116508   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.116774   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.116925   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.117104   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.117252   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.119073   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.119413   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.119430   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.119685   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.119854   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.120011   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.120148   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.127174   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I0925 11:30:46.127843   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.128399   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.128428   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.128967   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.129155   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.129945   57426 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-694015" context rescaled to 1 replicas
	I0925 11:30:46.129977   57426 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 11:30:46.131741   57426 out.go:177] * Verifying Kubernetes components...
	I0925 11:30:46.133087   57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:46.130848   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.134728   57426 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0925 11:30:44.129372   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:46.133247   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:48.630362   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:46.136080   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 11:30:46.136097   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0925 11:30:46.136115   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.139231   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.139692   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.139718   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.139957   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.140113   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.140252   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.140377   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.147885   57426 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-694015"
	W0925 11:30:46.147907   57426 addons.go:240] addon default-storageclass should already be in state true
	I0925 11:30:46.147934   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.148356   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.148384   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.173474   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0925 11:30:46.174243   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.174879   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.174900   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.176033   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.176694   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.176736   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.196631   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
	I0925 11:30:46.197107   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.197645   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.197665   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.198067   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.198270   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.200093   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.200354   57426 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 11:30:46.200371   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 11:30:46.200390   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.203486   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.203884   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.203998   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.204172   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.204342   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.204489   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.204636   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.413931   57426 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-694015" to be "Ready" ...
	I0925 11:30:46.414008   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 11:30:46.416569   57426 node_ready.go:49] node "old-k8s-version-694015" has status "Ready":"True"
	I0925 11:30:46.416586   57426 node_ready.go:38] duration metric: took 2.626333ms waiting for node "old-k8s-version-694015" to be "Ready" ...
	I0925 11:30:46.416594   57426 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:30:46.420795   57426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace to be "Ready" ...
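	The pod_ready poller above re-checks the pod's "Ready" condition until its 6m deadline expires. Assuming kubectl and the test kubeconfig are available on the host, a roughly equivalent one-shot check (illustrative, not part of the test) would be:

	    kubectl --context old-k8s-version-694015 -n kube-system \
	      wait --for=condition=Ready pod/coredns-5644d7b6d9-qnqxm --timeout=6m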
	I0925 11:30:46.484507   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 11:30:46.484532   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0925 11:30:46.532417   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0925 11:30:46.532443   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0925 11:30:46.575299   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 11:30:46.575317   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0925 11:30:46.595994   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:30:46.596018   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0925 11:30:46.652448   57426 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	registry.k8s.io/pause:3.1
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0925 11:30:46.652473   57426 cache_images.go:84] Images are preloaded, skipping loading
	I0925 11:30:46.652480   57426 cache_images.go:262] succeeded pushing to: old-k8s-version-694015
	I0925 11:30:46.652483   57426 cache_images.go:263] failed pushing to: 
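	The image listing above is what lets cache_images conclude that the v1.16.0 preload is already in place, so nothing needs to be pushed. The same inventory can be reproduced by hand from the host (illustrative):

	    minikube -p old-k8s-version-694015 ssh -- docker images --format '{{.Repository}}:{{.Tag}}'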
	I0925 11:30:46.652504   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:46.652518   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:46.652957   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:46.652963   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:46.652991   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:46.653007   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:46.653020   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:46.653288   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:46.653304   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:46.705521   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:30:46.707099   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:30:46.712115   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0925 11:30:46.712134   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0925 11:30:46.762833   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 11:30:46.851711   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0925 11:30:46.851753   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0925 11:30:47.115165   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0925 11:30:47.115193   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0925 11:30:47.386363   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0925 11:30:47.386386   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0925 11:30:47.610468   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0925 11:30:47.610490   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0925 11:30:47.697559   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0925 11:30:47.697578   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0925 11:30:47.864150   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0925 11:30:47.864169   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0925 11:30:47.915917   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:30:47.915945   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0925 11:30:48.000793   57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.586742998s)
	I0925 11:30:48.000836   57426 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
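	The sed pipeline above splices a hosts block into the CoreDNS Corefile ahead of the forward directive, making host.minikube.internal resolve to the host-side gateway. Reconstructed from the sed expressions (whitespace approximate), the injected fragment is:

	    hosts {
	       192.168.50.1 host.minikube.internal
	       fallthrough
	    }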
	I0925 11:30:48.085411   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:30:48.190617   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.485051258s)
	I0925 11:30:48.190677   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.190691   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.191035   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.191056   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.191068   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.191078   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.192850   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.192853   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.192876   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.192885   57426 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-694015"
	I0925 11:30:48.465209   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:48.575177   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.868034342s)
	I0925 11:30:48.575232   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575246   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575181   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.812311763s)
	I0925 11:30:48.575317   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575328   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575540   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.575560   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.575570   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575579   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575635   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.575742   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.575772   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.575781   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.575789   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575797   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575878   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.575903   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.575911   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.577345   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.577384   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.577406   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.577435   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.577451   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.577940   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.577944   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.577964   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:49.298546   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.21307781s)
	I0925 11:30:49.298606   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:49.298628   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:49.302266   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:49.302272   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:49.302307   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:49.302321   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:49.302331   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:49.302655   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:49.302695   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:49.302717   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:49.304441   57426 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-694015 addons enable metrics-server	
	
	
	I0925 11:30:49.306061   57426 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0925 11:30:49.307539   57426 addons.go:502] enable addons completed in 3.258962527s: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
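	Once the manifests are applied, the enabled addon set can be confirmed from the host with a manual check (illustrative, not run by the test):

	    minikube -p old-k8s-version-694015 addons list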
	I0925 11:30:50.630959   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:53.128983   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:50.940378   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:53.436796   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:55.437380   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:55.131064   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:57.628873   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:57.449840   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:59.938237   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:59.629445   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:02.129311   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:02.438436   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:04.937614   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:04.627904   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:06.629258   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:08.629473   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:06.937878   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:09.437807   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:11.128681   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:13.129731   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:11.939073   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:14.437620   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:15.628774   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:17.630838   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:16.938666   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:19.437732   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:20.139603   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:22.629587   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:21.938151   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:23.938328   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:25.130178   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:27.628803   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:26.439526   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:28.937508   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:29.631037   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:32.128151   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:30.943648   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:33.437428   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:35.438086   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:34.129227   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:36.129294   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:38.629985   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:37.439039   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:39.442448   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:41.129913   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:43.631099   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:41.937237   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:43.939282   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:46.128833   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:48.628446   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:46.438561   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:48.938598   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:50.629674   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:53.129010   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:50.938694   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:52.939141   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:55.438245   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:55.629903   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:58.128851   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:31:57.937434   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:00.437596   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:00.129216   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:02.629241   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:02.437909   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:04.438109   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:04.629284   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:07.128455   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:06.438145   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:08.938681   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:09.129543   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:11.629259   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:11.438436   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:13.438614   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:14.130657   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:16.629579   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:15.938889   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:18.438798   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:19.129812   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:21.630003   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:20.937670   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:22.938056   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:24.938180   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:24.128380   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:26.129010   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:28.630164   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:26.938537   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:28.938993   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:31.127679   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:33.128750   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:30.939782   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:33.438287   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:35.438564   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:35.128786   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:37.129289   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:37.938062   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:40.438394   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:39.129627   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:41.131250   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:43.629234   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:42.439143   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:44.938221   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:45.630527   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:48.128292   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:46.940247   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:48.940644   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:50.128630   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:52.129574   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:51.437686   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:53.438013   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:55.438473   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:54.629843   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:57.128814   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:57.939231   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:00.438636   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:32:59.633169   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:02.129926   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:02.937519   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:04.937631   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:04.629189   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:06.629835   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:08.629868   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:07.436605   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:09.437297   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:11.128030   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:13.128211   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:11.438337   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:13.939288   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:14.611278   59899 pod_ready.go:81] duration metric: took 4m0.000327599s waiting for pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:14.611332   59899 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:33:14.611349   59899 pod_ready.go:38] duration metric: took 4m12.007655968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:33:14.611376   59899 kubeadm.go:640] restartCluster took 4m31.218254898s
	W0925 11:33:14.611443   59899 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0925 11:33:14.611477   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
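	Because metrics-server-57f55c9bc5-xcns4 never reported Ready within the 4m deadline, restartCluster is abandoned in favor of a kubeadm reset followed by a fresh init. The usual manual diagnosis at this point would be to inspect the pod's events (illustrative; these StartStop profiles point metrics-server at the unpullable fake.domain/registry.k8s.io/echoserver:1.4 image, as seen earlier in this log, so image-pull failures would be the expected finding):

	    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-xcns4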
	I0925 11:33:15.940496   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:18.440278   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:23.826236   59899 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (9.214737742s)
	I0925 11:33:23.826300   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:33:23.840564   59899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 11:33:23.850760   59899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 11:33:23.860161   59899 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 11:33:23.860203   59899 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 11:33:20.938819   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:22.939228   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:24.940142   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:24.111104   59899 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 11:33:27.440968   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:29.937681   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:33.957801   59899 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0925 11:33:33.957861   59899 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 11:33:33.957964   59899 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 11:33:33.958127   59899 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 11:33:33.958257   59899 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 11:33:33.958352   59899 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 11:33:33.961247   59899 out.go:204]   - Generating certificates and keys ...
	I0925 11:33:33.961330   59899 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 11:33:33.961381   59899 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 11:33:33.961482   59899 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0925 11:33:33.961584   59899 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0925 11:33:33.961691   59899 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0925 11:33:33.961764   59899 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0925 11:33:33.961860   59899 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0925 11:33:33.961946   59899 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0925 11:33:33.962038   59899 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0925 11:33:33.962141   59899 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0925 11:33:33.962189   59899 kubeadm.go:322] [certs] Using the existing "sa" key
	I0925 11:33:33.962274   59899 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 11:33:33.962342   59899 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 11:33:33.962404   59899 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 11:33:33.962484   59899 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 11:33:33.962596   59899 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 11:33:33.962722   59899 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 11:33:33.962812   59899 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 11:33:33.964227   59899 out.go:204]   - Booting up control plane ...
	I0925 11:33:33.964334   59899 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 11:33:33.964411   59899 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 11:33:33.964484   59899 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 11:33:33.964622   59899 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 11:33:33.964767   59899 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 11:33:33.964843   59899 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 11:33:33.964974   59899 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 11:33:33.965033   59899 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.004093 seconds
	I0925 11:33:33.965122   59899 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 11:33:33.965219   59899 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 11:33:33.965300   59899 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 11:33:33.965551   59899 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-094323 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 11:33:33.965631   59899 kubeadm.go:322] [bootstrap-token] Using token: jxl01o.6st4cg36x4e3zwsq
	I0925 11:33:33.968152   59899 out.go:204]   - Configuring RBAC rules ...
	I0925 11:33:33.968255   59899 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 11:33:33.968324   59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 11:33:33.968463   59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 11:33:33.968579   59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 11:33:33.968719   59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 11:33:33.968841   59899 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 11:33:33.968990   59899 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 11:33:33.969057   59899 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 11:33:33.969115   59899 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 11:33:33.969125   59899 kubeadm.go:322] 
	I0925 11:33:33.969197   59899 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 11:33:33.969206   59899 kubeadm.go:322] 
	I0925 11:33:33.969302   59899 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 11:33:33.969309   59899 kubeadm.go:322] 
	I0925 11:33:33.969339   59899 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 11:33:33.969409   59899 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 11:33:33.969481   59899 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 11:33:33.969494   59899 kubeadm.go:322] 
	I0925 11:33:33.969577   59899 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0925 11:33:33.969592   59899 kubeadm.go:322] 
	I0925 11:33:33.969652   59899 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 11:33:33.969661   59899 kubeadm.go:322] 
	I0925 11:33:33.969721   59899 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 11:33:33.969820   59899 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 11:33:33.969931   59899 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 11:33:33.969945   59899 kubeadm.go:322] 
	I0925 11:33:33.970020   59899 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 11:33:33.970079   59899 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 11:33:33.970085   59899 kubeadm.go:322] 
	I0925 11:33:33.970149   59899 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jxl01o.6st4cg36x4e3zwsq \
	I0925 11:33:33.970246   59899 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 \
	I0925 11:33:33.970273   59899 kubeadm.go:322] 	--control-plane 
	I0925 11:33:33.970286   59899 kubeadm.go:322] 
	I0925 11:33:33.970379   59899 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 11:33:33.970391   59899 kubeadm.go:322] 
	I0925 11:33:33.970473   59899 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jxl01o.6st4cg36x4e3zwsq \
	I0925 11:33:33.970561   59899 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 
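	The --discovery-token-ca-cert-hash printed above can be recomputed from the cluster CA with the standard kubeadm recipe; using the certificateDir "/var/lib/minikube/certs" noted earlier (illustrative):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'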
	I0925 11:33:33.970570   59899 cni.go:84] Creating CNI manager for ""
	I0925 11:33:33.970583   59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:33:33.973276   59899 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 11:33:33.974771   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 11:33:33.991169   59899 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
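	The 457-byte conflist copied above is minikube's bridge CNI configuration. A representative bridge-plus-portmap conflist of that shape, written the same way, might look like the following; this is an illustrative sketch, and field values, including the pod subnet, are assumptions rather than the exact bytes minikube writes:

	    cat >/etc/cni/net.d/1-k8s.conflist <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF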
	I0925 11:33:34.014483   59899 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 11:33:34.014576   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:34.014605   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=embed-certs-094323 minikube.k8s.io/updated_at=2023_09_25T11_33_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
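	The two commands above finish the bootstrap wiring: the minikube-rbac ClusterRoleBinding grants cluster-admin to the kube-system:default ServiceAccount (per its --serviceaccount flag), and the label pass stamps the node with minikube version metadata. The resulting binding can be inspected afterwards (illustrative):

	    sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get clusterrolebinding minikube-rbac -o wide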
	I0925 11:33:31.938903   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:34.438342   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:34.061656   59899 ops.go:34] apiserver oom_adj: -16
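	The probe above reads the API server's OOM-killer bias from /proc; -16 tells the kernel to strongly avoid killing kube-apiserver under memory pressure. The value can be re-read at any time inside the VM (illustrative):

	    cat /proc/$(pgrep kube-apiserver)/oom_adj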
	I0925 11:33:34.486947   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:34.586316   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:35.181870   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:35.682572   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:36.182427   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:36.682439   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:37.182278   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:37.682264   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:38.181892   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:38.681964   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:36.938434   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:39.437659   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:39.181618   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:39.682052   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:40.181879   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:40.682579   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:41.182334   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:41.682270   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:42.181757   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:42.682314   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:43.181975   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:43.682310   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:41.438288   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:43.937112   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:44.182254   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:44.682566   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:45.181651   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:45.681891   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:45.783591   59899 kubeadm.go:1081] duration metric: took 11.769084129s to wait for elevateKubeSystemPrivileges.
	I0925 11:33:45.783631   59899 kubeadm.go:406] StartCluster complete in 5m2.419220731s
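The burst of identical `kubectl get sa default` invocations above is a readiness poll: the cluster only becomes usable once the apiserver's controllers have created the default service account, after which kube-system privileges can be elevated. A minimal sketch of that loop, assuming the binary paths from the log and a 500ms retry interval (the real elevateKubeSystemPrivileges logic lives in minikube's kubeadm package):

    // Sketch: poll until the "default" service account exists, then grant
    // cluster-admin to kube-system:default, as the minikube-rbac step does.
    // Paths come from the log; the interval and timeout are assumptions.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.28.2/kubectl"
        kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Succeeds only once the default service account has been created.
            if err := exec.Command(kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
                out, err := exec.Command(kubectl, "create", "clusterrolebinding",
                    "minikube-rbac", "--clusterrole=cluster-admin",
                    "--serviceaccount=kube-system:default", kubeconfig).CombinedOutput()
                fmt.Printf("%s err=%v\n", out, err)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }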
	I0925 11:33:45.783654   59899 settings.go:142] acquiring lock: {Name:mk372f3d0f6e5777ebfc48341e146821f27f636c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:33:45.783749   59899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:33:45.785139   59899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:33:45.785373   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 11:33:45.785497   59899 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 11:33:45.785578   59899 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-094323"
	I0925 11:33:45.785591   59899 addons.go:69] Setting default-storageclass=true in profile "embed-certs-094323"
	I0925 11:33:45.785600   59899 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-094323"
	W0925 11:33:45.785608   59899 addons.go:240] addon storage-provisioner should already be in state true
	I0925 11:33:45.785610   59899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-094323"
	I0925 11:33:45.785613   59899 addons.go:69] Setting metrics-server=true in profile "embed-certs-094323"
	I0925 11:33:45.785629   59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 11:33:45.785624   59899 addons.go:69] Setting dashboard=true in profile "embed-certs-094323"
	I0925 11:33:45.785641   59899 addons.go:231] Setting addon metrics-server=true in "embed-certs-094323"
	I0925 11:33:45.785649   59899 host.go:66] Checking if "embed-certs-094323" exists ...
	W0925 11:33:45.785652   59899 addons.go:240] addon metrics-server should already be in state true
	I0925 11:33:45.785661   59899 addons.go:231] Setting addon dashboard=true in "embed-certs-094323"
	W0925 11:33:45.785671   59899 addons.go:240] addon dashboard should already be in state true
	I0925 11:33:45.785702   59899 host.go:66] Checking if "embed-certs-094323" exists ...
	I0925 11:33:45.785726   59899 host.go:66] Checking if "embed-certs-094323" exists ...
	I0925 11:33:45.785720   59899 cache.go:107] acquiring lock: {Name:mk67fca357e44d730577a3f111223198f60ef976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:33:45.785794   59899 cache.go:115] /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0925 11:33:45.785804   59899 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 97.126µs
	I0925 11:33:45.785813   59899 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0925 11:33:45.785842   59899 cache.go:87] Successfully saved all images to host disk.
	I0925 11:33:45.786040   59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 11:33:45.786074   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786077   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786103   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786119   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.786100   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786148   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.786175   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.786226   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.786382   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786458   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.804658   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0925 11:33:45.804729   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I0925 11:33:45.804829   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0925 11:33:45.805237   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.805268   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.805835   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.805855   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.806126   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0925 11:33:45.806245   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.806461   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.806533   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.806584   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.806593   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.806608   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.806726   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41161
	I0925 11:33:45.806958   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.806973   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.807052   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.807117   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.807146   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.807158   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.807335   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.807550   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.807552   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.807628   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.807655   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.807678   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.807709   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.808075   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.808113   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.808146   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.808643   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.808695   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.809669   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.809713   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.815794   59899 addons.go:231] Setting addon default-storageclass=true in "embed-certs-094323"
	W0925 11:33:45.815817   59899 addons.go:240] addon default-storageclass should already be in state true
	I0925 11:33:45.815845   59899 host.go:66] Checking if "embed-certs-094323" exists ...
	I0925 11:33:45.816191   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.816218   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.818468   59899 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-094323" context rescaled to 1 replicas
	I0925 11:33:45.818498   59899 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 11:33:45.820484   59899 out.go:177] * Verifying Kubernetes components...
	I0925 11:33:45.821970   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:33:45.827608   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43233
	I0925 11:33:45.827764   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41093
	I0925 11:33:45.828140   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.828192   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.828742   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.828756   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.828865   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.828875   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.829243   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.829291   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.829499   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.829508   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.829541   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0925 11:33:45.830368   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.830816   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.830834   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.830898   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0925 11:33:45.831336   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.831343   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.831544   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.831741   59899 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:33:45.831767   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.831896   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.831910   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.831962   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.832006   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.834683   59899 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0925 11:33:45.833215   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.835296   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.836115   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.836132   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.836140   59899 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 11:33:45.835941   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.837552   59899 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:33:45.837565   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 11:33:45.837580   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.836081   59899 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 11:33:45.837626   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0925 11:33:45.837640   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.836328   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.837722   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.838263   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.838449   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:45.840153   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.841675   59899 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0925 11:33:45.843211   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0925 11:33:45.841916   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.842082   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.842734   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.842915   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.843565   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.844615   59899 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0925 11:33:45.845951   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0925 11:33:45.845966   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0925 11:33:45.845980   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.844700   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.844729   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.846027   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.844863   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.846043   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.844886   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.845165   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.846085   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.846265   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.846317   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.846412   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:45.846432   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.847139   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:45.847153   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.847192   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.848989   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.849283   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.849314   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.849456   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.849635   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.849777   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.849913   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:45.862447   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40913
	I0925 11:33:45.862828   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.863295   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.863325   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.863706   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.863888   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.865511   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.865802   59899 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 11:33:45.865821   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 11:33:45.865840   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.868353   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.868774   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.868808   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.868936   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.869132   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.869260   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.869371   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
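Each `scp memory --> <path> (N bytes)` line that follows streams an in-memory manifest to the VM instead of staging a temp file. A hedged sketch of the same effect using plain ssh and sudo tee; minikube's actual SSH runner differs, the manifest body is a placeholder, and only the host, user, and key path are taken from the log:

    // Sketch: write an in-memory manifest to a remote path over ssh.
    // The YAML body here is a stand-in, not the addon's real content.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        manifest := "apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: storage-provisioner\n  namespace: kube-system\n"
        cmd := exec.Command("ssh",
            "-i", "/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa",
            "docker@192.168.39.111",
            "sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null")
        // Stream the bytes over stdin rather than copying a local file.
        cmd.Stdin = strings.NewReader(manifest)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("copy failed: %v: %s\n", err, out)
            return
        }
        fmt.Printf("wrote %d bytes\n", len(manifest))
    }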
	I0925 11:33:46.090766   59899 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 11:33:46.090794   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0925 11:33:46.148251   59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:33:46.244486   59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 11:33:46.246747   59899 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 11:33:46.246767   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0925 11:33:46.285706   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0925 11:33:46.285733   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0925 11:33:46.399367   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0925 11:33:46.399389   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0925 11:33:46.454580   59899 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:33:46.454598   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0925 11:33:46.478692   59899 node_ready.go:35] waiting up to 6m0s for node "embed-certs-094323" to be "Ready" ...
	I0925 11:33:46.478749   59899 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0925 11:33:46.478754   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 11:33:46.478763   59899 cache_images.go:84] Images are preloaded, skipping loading
	I0925 11:33:46.478772   59899 cache_images.go:262] succeeded pushing to: embed-certs-094323
	I0925 11:33:46.478777   59899 cache_images.go:263] failed pushing to: 
	I0925 11:33:46.478797   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:46.478821   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:46.479120   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:46.479177   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:46.479190   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:46.479200   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:46.479138   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:46.479613   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:46.479623   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:46.479632   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:46.495731   59899 node_ready.go:49] node "embed-certs-094323" has status "Ready":"True"
	I0925 11:33:46.495756   59899 node_ready.go:38] duration metric: took 17.032177ms waiting for node "embed-certs-094323" to be "Ready" ...
	I0925 11:33:46.495768   59899 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:33:46.502666   59899 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace to be "Ready" ...
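The pod_ready.go wait above boils down to polling the pod's Ready condition until it reports "True". A sketch under the assumption that shelling out to kubectl with a jsonpath query is acceptable (minikube itself uses client-go), with the pod name and 6-minute budget taken from the log:

    // Sketch: poll a pod's Ready condition via kubectl jsonpath.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func podReady(name, namespace string) (bool, error) {
        out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            ok, err := podReady("coredns-5dd5756b68-56lj4", "kube-system")
            if err == nil && ok {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }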
	I0925 11:33:46.590707   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0925 11:33:46.590728   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0925 11:33:46.646116   59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:33:46.836729   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0925 11:33:46.836758   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0925 11:33:47.081956   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0925 11:33:47.081978   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0925 11:33:47.372971   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0925 11:33:47.372999   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0925 11:33:47.548990   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0925 11:33:47.549016   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0925 11:33:47.759403   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0925 11:33:47.759425   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0925 11:33:48.094571   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:33:48.094601   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0925 11:33:48.300509   59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:33:48.523994   59899 pod_ready.go:102] pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:49.536334   59899 pod_ready.go:92] pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.536354   59899 pod_ready.go:81] duration metric: took 3.03366041s waiting for pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.536365   59899 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.539583   59899 pod_ready.go:97] error getting pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-pbwqs" not found
	I0925 11:33:49.539613   59899 pod_ready.go:81] duration metric: took 3.241249ms waiting for pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:49.539624   59899 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-pbwqs" not found
	I0925 11:33:49.539633   59899 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.549714   59899 pod_ready.go:92] pod "etcd-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.549731   59899 pod_ready.go:81] duration metric: took 10.090379ms waiting for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.549742   59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.554903   59899 pod_ready.go:92] pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.554917   59899 pod_ready.go:81] duration metric: took 5.167429ms waiting for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.554927   59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.564229   59899 pod_ready.go:92] pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.564249   59899 pod_ready.go:81] duration metric: took 9.314363ms waiting for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.564261   59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pjwm2" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.568126   59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.41983793s)
	I0925 11:33:49.568187   59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.323661752s)
	I0925 11:33:49.568232   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568239   59899 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.089462417s)
	I0925 11:33:49.568251   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568256   59899 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0925 11:33:49.568301   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568319   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568360   59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.922215522s)
	I0925 11:33:49.568392   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568407   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568608   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.568626   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.568637   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568643   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.568674   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.568685   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568689   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.568695   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568697   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:49.568704   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.568646   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568716   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568725   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568613   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:49.568959   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:49.568977   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:49.569003   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.569015   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.569016   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.569024   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.569031   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.569036   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.569045   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.569048   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.569033   59899 addons.go:467] Verifying addon metrics-server=true in "embed-certs-094323"
	I0925 11:33:49.569276   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.569292   59899 main.go:141] libmachine: Making call to close connection to plugin binary
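The long sed pipeline completed a few lines above splices a hosts{} block resolving host.minikube.internal into the coredns ConfigMap. The same edit, sketched in Go with kubectl standing in for minikube's SSH runner; the 8-space indentation matches the Corefile layout that the log's own sed expression targets:

    // Sketch: inject a host.minikube.internal hosts{} block ahead of the
    // forward plugin in the coredns ConfigMap, then replace it.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "-n", "kube-system",
            "get", "configmap", "coredns", "-o", "yaml").Output()
        if err != nil {
            fmt.Println("get configmap failed:", err)
            return
        }
        hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
        // Insert the hosts block just before the forward plugin, as the
        // sed expression in the log does.
        patched := strings.Replace(string(out),
            "        forward . /etc/resolv.conf",
            hosts+"        forward . /etc/resolv.conf", 1)

        apply := exec.Command("kubectl", "replace", "-f", "-")
        apply.Stdin = strings.NewReader(patched)
        if msg, err := apply.CombinedOutput(); err != nil {
            fmt.Printf("replace failed: %v: %s\n", err, msg)
        }
    }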
	I0925 11:33:49.883443   59899 pod_ready.go:92] pod "kube-proxy-pjwm2" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.883465   59899 pod_ready.go:81] duration metric: took 319.196098ms waiting for pod "kube-proxy-pjwm2" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.883477   59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:50.292288   59899 pod_ready.go:92] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:50.292314   59899 pod_ready.go:81] duration metric: took 408.829404ms waiting for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:50.292325   59899 pod_ready.go:38] duration metric: took 3.79654573s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:33:50.292349   59899 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:33:50.292413   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:33:50.390976   59899 api_server.go:72] duration metric: took 4.572446849s to wait for apiserver process to appear ...
	I0925 11:33:50.390998   59899 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:33:50.391016   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
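The healthz wait is a plain HTTPS GET against the apiserver. A minimal sketch, with the endpoint taken from the log; certificate verification is skipped here only for brevity (minikube's real check trusts the cluster CA):

    // Sketch: probe the apiserver's /healthz endpoint and report the result.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustration only; prefer pinning the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.39.111:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with body "ok".
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
    }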
	I0925 11:33:50.391107   59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.090546724s)
	I0925 11:33:50.391160   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:50.391179   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:50.391539   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:50.391540   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:50.391568   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:50.391584   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:50.391594   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:50.391810   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:50.391822   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:50.391828   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:50.393750   59899 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-094323 addons enable metrics-server	
	
	
	I0925 11:33:50.395438   59899 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0925 11:33:45.939462   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:47.439176   57426 pod_ready.go:97] node "old-k8s-version-694015" hosting pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.439201   57426 pod_ready.go:81] duration metric: took 3m1.018383263s waiting for pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:47.439210   57426 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-694015" hosting pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.439218   57426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:47.441757   57426 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rn247" not found
	I0925 11:33:47.441785   57426 pod_ready.go:81] duration metric: took 2.55834ms waiting for pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:47.441797   57426 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rn247" not found
	I0925 11:33:47.441806   57426 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gsdzk" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:47.447728   57426 pod_ready.go:97] node "old-k8s-version-694015" hosting pod "kube-proxy-gsdzk" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.447759   57426 pod_ready.go:81] duration metric: took 5.944858ms waiting for pod "kube-proxy-gsdzk" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:47.447770   57426 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-694015" hosting pod "kube-proxy-gsdzk" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.447777   57426 pod_ready.go:38] duration metric: took 3m1.031173472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:33:47.447809   57426 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:33:47.447887   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:33:47.480326   57426 logs.go:284] 1 containers: [34825b8222f1]
	I0925 11:33:47.480410   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:33:47.500790   57426 logs.go:284] 1 containers: [4b655f8475a9]
	I0925 11:33:47.500883   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:33:47.521967   57426 logs.go:284] 1 containers: [c4e353aa787b]
	I0925 11:33:47.522043   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:33:47.542833   57426 logs.go:284] 1 containers: [08dbfa6061b3]
	I0925 11:33:47.542921   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:33:47.564220   57426 logs.go:284] 1 containers: [2bccdb65c1cc]
	I0925 11:33:47.564296   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:33:47.585142   57426 logs.go:284] 1 containers: [59225a8740b7]
	I0925 11:33:47.585233   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:33:47.604606   57426 logs.go:284] 0 containers: []
	W0925 11:33:47.604638   57426 logs.go:286] No container was found matching "kindnet"
	I0925 11:33:47.604734   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:33:47.634903   57426 logs.go:284] 1 containers: [0f9de8bda7fb]
	I0925 11:33:47.634987   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:33:47.659599   57426 logs.go:284] 1 containers: [90dc66317fc1]
	I0925 11:33:47.659654   57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
	I0925 11:33:47.659677   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
	I0925 11:33:47.713402   57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
	I0925 11:33:47.713441   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
	I0925 11:33:47.746308   57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
	I0925 11:33:47.746347   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
	I0925 11:33:47.777953   57426 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:33:47.777991   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:33:47.933013   57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
	I0925 11:33:47.933041   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
	I0925 11:33:47.959588   57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
	I0925 11:33:47.959623   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
	I0925 11:33:47.989240   57426 logs.go:123] Gathering logs for container status ...
	I0925 11:33:47.989285   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:33:48.069991   57426 logs.go:123] Gathering logs for kubelet ...
	I0925 11:33:48.070022   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
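The `Found kubelet problem` warnings that follow come from scanning the journalctl output gathered here for pod sync failures. A rough sketch of that scan; the exact patterns minikube matches are assumptions:

    // Sketch: flag kubelet journal lines that look like pod sync errors.
    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
        if err != nil {
            fmt.Println("journalctl failed:", err)
            return
        }
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            line := sc.Text()
            if strings.Contains(line, "Error syncing pod") ||
                strings.Contains(line, "ErrImagePull") ||
                strings.Contains(line, "ImagePullBackOff") {
                fmt.Println("kubelet problem:", line)
            }
        }
    }

In this run every flagged line traces back to metrics-server failing to pull fake.domain/registry.k8s.io/echoserver:1.4, consistent with the unresolvable fake.domain image configured for this test earlier in the output.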
	W0925 11:33:48.107511   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.108197   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.108438   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.108657   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940    1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
	W0925 11:33:48.109661   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.109891   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.110800   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.111045   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.111291   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.111524   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.112518   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.112765   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.112989   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113221   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113444   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113656   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113877   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.114848   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.115076   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115297   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115517   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115743   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115978   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.116194   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.148933   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:48.150648   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
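
Note: the recurring ImagePullBackOff/ErrImagePull entries above come from the metrics-server addon being deliberately pointed at the unreachable registry fake.domain, which the libvirt DNS resolver at 192.168.122.1 cannot resolve. A minimal reproduction from inside the VM, using only the profile name and image that appear in the log, would be:

	# Reproduce the pull failure seen in the kubelet entries (hypothetical invocation;
	# profile, image, and expected error are taken verbatim from the log above):
	minikube -p old-k8s-version-694015 ssh -- docker pull fake.domain/registry.k8s.io/echoserver:1.4
	# Expected: Error response from daemon: Get "https://fake.domain/v2/":
	#           dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
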
	I0925 11:33:48.152304   57426 logs.go:123] Gathering logs for dmesg ...
	I0925 11:33:48.152321   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:33:48.170706   57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
	I0925 11:33:48.170735   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
	I0925 11:33:48.204533   57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
	I0925 11:33:48.204574   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
	I0925 11:33:48.242201   57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
	I0925 11:33:48.242239   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
	I0925 11:33:48.305874   57426 logs.go:123] Gathering logs for Docker ...
	I0925 11:33:48.305916   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:33:48.375041   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:48.375074   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0925 11:33:48.375130   57426 out.go:239] X Problems detected in kubelet:
	W0925 11:33:48.375142   57426 out.go:239]   Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.375161   57426 out.go:239]   Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.375169   57426 out.go:239]   Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.375176   57426 out.go:239]   Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:48.375185   57426 out.go:239]   Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:48.375190   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:48.375199   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
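
Note: the "secrets ... is forbidden" entry in the summary above is the Node authorizer at work: a kubelet may only read a secret once a pod referencing it is bound to that node, and it is never allowed to list the whole namespace, so the denial right after the restart is expected to be transient. A hedged way to confirm the denial, assuming kubectl is pointed at this profile:

	# Hypothetical check of the node-authorizer denial quoted above (user and
	# namespace copied from the log line):
	kubectl auth can-i list secrets -n kube-system \
	  --as=system:node:old-k8s-version-694015 --as-group=system:nodes
	# Expected: "no" - node identities get per-secret access via bound pods,
	# never namespace-wide list
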
	I0925 11:33:50.396708   59899 addons.go:502] enable addons completed in 4.611221618s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0925 11:33:50.409202   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 200:
	ok
	I0925 11:33:50.411339   59899 api_server.go:141] control plane version: v1.28.2
	I0925 11:33:50.411356   59899 api_server.go:131] duration metric: took 20.35197ms to wait for apiserver health ...
	I0925 11:33:50.411366   59899 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:33:50.490420   59899 system_pods.go:59] 8 kube-system pods found
	I0925 11:33:50.490453   59899 system_pods.go:61] "coredns-5dd5756b68-56lj4" [447db0fe-7ec3-443c-9219-f6520653ae3f] Running
	I0925 11:33:50.490461   59899 system_pods.go:61] "etcd-embed-certs-094323" [48127edf-44a2-46ac-b5db-c1d47f97c3a5] Running
	I0925 11:33:50.490468   59899 system_pods.go:61] "kube-apiserver-embed-certs-094323" [3a47c725-2ede-48c8-a825-e3d1f90710f2] Running
	I0925 11:33:50.490476   59899 system_pods.go:61] "kube-controller-manager-embed-certs-094323" [8692df25-5b4e-424b-8ae0-aedd5f249b98] Running
	I0925 11:33:50.490483   59899 system_pods.go:61] "kube-proxy-pjwm2" [845a56ac-d0b3-4331-aa60-8d473ca65a44] Running
	I0925 11:33:50.490489   59899 system_pods.go:61] "kube-scheduler-embed-certs-094323" [12968319-1047-4b1d-a54f-7c192604a75d] Running
	I0925 11:33:50.490500   59899 system_pods.go:61] "metrics-server-57f55c9bc5-5xjw8" [5634c692-d7e5-49d5-a39a-3473e5f58d58] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:33:50.490515   59899 system_pods.go:61] "storage-provisioner" [913ce54f-ebcc-4b9c-bf76-ff0139a1b44f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:33:50.490528   59899 system_pods.go:74] duration metric: took 79.155444ms to wait for pod list to return data ...
	I0925 11:33:50.490540   59899 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:33:50.691794   59899 default_sa.go:45] found service account: "default"
	I0925 11:33:50.691828   59899 default_sa.go:55] duration metric: took 201.27577ms for default service account to be created ...
	I0925 11:33:50.691838   59899 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:33:50.887600   59899 system_pods.go:86] 8 kube-system pods found
	I0925 11:33:50.887636   59899 system_pods.go:89] "coredns-5dd5756b68-56lj4" [447db0fe-7ec3-443c-9219-f6520653ae3f] Running
	I0925 11:33:50.887645   59899 system_pods.go:89] "etcd-embed-certs-094323" [48127edf-44a2-46ac-b5db-c1d47f97c3a5] Running
	I0925 11:33:50.887652   59899 system_pods.go:89] "kube-apiserver-embed-certs-094323" [3a47c725-2ede-48c8-a825-e3d1f90710f2] Running
	I0925 11:33:50.887662   59899 system_pods.go:89] "kube-controller-manager-embed-certs-094323" [8692df25-5b4e-424b-8ae0-aedd5f249b98] Running
	I0925 11:33:50.887668   59899 system_pods.go:89] "kube-proxy-pjwm2" [845a56ac-d0b3-4331-aa60-8d473ca65a44] Running
	I0925 11:33:50.887675   59899 system_pods.go:89] "kube-scheduler-embed-certs-094323" [12968319-1047-4b1d-a54f-7c192604a75d] Running
	I0925 11:33:50.887683   59899 system_pods.go:89] "metrics-server-57f55c9bc5-5xjw8" [5634c692-d7e5-49d5-a39a-3473e5f58d58] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:33:50.887694   59899 system_pods.go:89] "storage-provisioner" [913ce54f-ebcc-4b9c-bf76-ff0139a1b44f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:33:50.887707   59899 system_pods.go:126] duration metric: took 195.862461ms to wait for k8s-apps to be running ...
	I0925 11:33:50.887718   59899 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 11:33:50.887769   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:33:50.910382   59899 system_svc.go:56] duration metric: took 22.655864ms WaitForService to wait for kubelet.
	I0925 11:33:50.910410   59899 kubeadm.go:581] duration metric: took 5.091888107s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 11:33:50.910429   59899 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:33:51.083597   59899 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:33:51.083633   59899 node_conditions.go:123] node cpu capacity is 2
	I0925 11:33:51.083648   59899 node_conditions.go:105] duration metric: took 173.214402ms to run NodePressure ...
	I0925 11:33:51.083660   59899 start.go:228] waiting for startup goroutines ...
	I0925 11:33:51.083670   59899 start.go:233] waiting for cluster config update ...
	I0925 11:33:51.083682   59899 start.go:242] writing updated cluster config ...
	I0925 11:33:51.084016   59899 ssh_runner.go:195] Run: rm -f paused
	I0925 11:33:51.130189   59899 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0925 11:33:51.132357   59899 out.go:177] * Done! kubectl is now configured to use "embed-certs-094323" cluster and "default" namespace by default
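
Note: the stderr in this report interleaves two concurrent `minikube start` invocations. PID 57426 is the failing old-k8s-version-694015 run under test; PID 59899 is the embed-certs-094323 run, which completes successfully at the "Done!" line above, after which the trace returns to 57426. Separating them only requires filtering on the PID column (the log file name here is hypothetical):

	grep ' 57426 ' serial.log   # old-k8s-version-694015 - the failing profile
	grep ' 59899 ' serial.log   # embed-certs-094323 - finished successfully above
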
	I0925 11:33:58.376816   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:33:58.397417   57426 api_server.go:72] duration metric: took 3m12.267407933s to wait for apiserver process to appear ...
	I0925 11:33:58.397443   57426 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:33:58.397517   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:33:58.423312   57426 logs.go:284] 1 containers: [34825b8222f1]
	I0925 11:33:58.423385   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:33:58.443439   57426 logs.go:284] 1 containers: [4b655f8475a9]
	I0925 11:33:58.443499   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:33:58.463360   57426 logs.go:284] 1 containers: [c4e353aa787b]
	I0925 11:33:58.463443   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:33:58.486151   57426 logs.go:284] 1 containers: [08dbfa6061b3]
	I0925 11:33:58.486228   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:33:58.507009   57426 logs.go:284] 1 containers: [2bccdb65c1cc]
	I0925 11:33:58.507095   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:33:58.525571   57426 logs.go:284] 1 containers: [59225a8740b7]
	I0925 11:33:58.525647   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:33:58.542397   57426 logs.go:284] 0 containers: []
	W0925 11:33:58.542424   57426 logs.go:286] No container was found matching "kindnet"
	I0925 11:33:58.542481   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:33:58.562186   57426 logs.go:284] 1 containers: [0f9de8bda7fb]
	I0925 11:33:58.562260   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:33:58.580984   57426 logs.go:284] 1 containers: [90dc66317fc1]
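
Note: each diagnostic pass, as the Run: lines above show, first resolves one container ID per control-plane component with a docker name filter and then tails 400 lines from each. A manual equivalent, runnable inside the VM, with the component list copied from the filters in the log:

	# Sketch of the same enumeration the log performs via ssh_runner:
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kubernetes-dashboard storage-provisioner; do
	  id=$(docker ps -a --filter=name=k8s_"$c" --format='{{.ID}}' | head -n1)
	  [ -n "$id" ] && docker logs --tail 400 "$id"
	done
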
	I0925 11:33:58.581014   57426 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:33:58.581030   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:33:58.731921   57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
	I0925 11:33:58.731958   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
	I0925 11:33:58.759982   57426 logs.go:123] Gathering logs for Docker ...
	I0925 11:33:58.760017   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:33:58.817088   57426 logs.go:123] Gathering logs for kubelet ...
	I0925 11:33:58.817120   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 11:33:58.851581   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.852006   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.852226   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.852405   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940    1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
	W0925 11:33:58.853080   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.853245   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.853866   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.854027   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.854211   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.854408   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855047   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.855223   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855403   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855601   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855811   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.856008   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.856210   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.856868   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.857032   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857219   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857418   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857616   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857814   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.858011   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.889357   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:58.891108   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:58.893160   57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
	I0925 11:33:58.893178   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
	I0925 11:33:58.927223   57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
	I0925 11:33:58.927264   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
	I0925 11:33:58.951343   57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
	I0925 11:33:58.951376   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
	I0925 11:33:58.979268   57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
	I0925 11:33:58.979303   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
	I0925 11:33:59.010031   57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
	I0925 11:33:59.010059   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
	I0925 11:33:59.050333   57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
	I0925 11:33:59.050367   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
	I0925 11:33:59.093782   57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
	I0925 11:33:59.093820   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
	I0925 11:33:59.118196   57426 logs.go:123] Gathering logs for container status ...
	I0925 11:33:59.118222   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:33:59.228267   57426 logs.go:123] Gathering logs for dmesg ...
	I0925 11:33:59.228306   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:33:59.247426   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:59.247459   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0925 11:33:59.247517   57426 out.go:239] X Problems detected in kubelet:
	W0925 11:33:59.247534   57426 out.go:239]   Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:59.247545   57426 out.go:239]   Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:59.247554   57426 out.go:239]   Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:59.247563   57426 out.go:239]   Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:59.247574   57426 out.go:239]   Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:59.247584   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:59.247597   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:34:09.249955   57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0925 11:34:09.256612   57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0925 11:34:09.257809   57426 api_server.go:141] control plane version: v1.16.0
	I0925 11:34:09.257827   57426 api_server.go:131] duration metric: took 10.860379501s to wait for apiserver health ...
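
Note: the healthz wait above polls the apiserver endpoint derived from the profile's IP until it returns HTTP 200 with body "ok". An equivalent manual probe, with the address taken from the log (-k skips verification of minikube's self-signed CA):

	curl -k https://192.168.50.17:8443/healthz
	# Expected body: ok
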
	I0925 11:34:09.257833   57426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:34:09.257883   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:34:09.280149   57426 logs.go:284] 1 containers: [34825b8222f1]
	I0925 11:34:09.280233   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:34:09.300127   57426 logs.go:284] 1 containers: [4b655f8475a9]
	I0925 11:34:09.300211   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:34:09.332581   57426 logs.go:284] 1 containers: [c4e353aa787b]
	I0925 11:34:09.332656   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:34:09.352994   57426 logs.go:284] 1 containers: [08dbfa6061b3]
	I0925 11:34:09.353061   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:34:09.374892   57426 logs.go:284] 1 containers: [2bccdb65c1cc]
	I0925 11:34:09.374960   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:34:09.395820   57426 logs.go:284] 1 containers: [59225a8740b7]
	I0925 11:34:09.395884   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:34:09.414225   57426 logs.go:284] 0 containers: []
	W0925 11:34:09.414245   57426 logs.go:286] No container was found matching "kindnet"
	I0925 11:34:09.414284   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:34:09.434336   57426 logs.go:284] 1 containers: [0f9de8bda7fb]
	I0925 11:34:09.434398   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:34:09.456185   57426 logs.go:284] 1 containers: [90dc66317fc1]
	I0925 11:34:09.456218   57426 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:34:09.456231   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:34:09.590378   57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
	I0925 11:34:09.590409   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
	I0925 11:34:09.617599   57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
	I0925 11:34:09.617624   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
	I0925 11:34:09.643431   57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
	I0925 11:34:09.643459   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
	I0925 11:34:09.665103   57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
	I0925 11:34:09.665129   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
	I0925 11:34:09.693931   57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
	I0925 11:34:09.693963   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
	I0925 11:34:09.742784   57426 logs.go:123] Gathering logs for Docker ...
	I0925 11:34:09.742812   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:34:09.804145   57426 logs.go:123] Gathering logs for dmesg ...
	I0925 11:34:09.804177   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:34:09.818586   57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
	I0925 11:34:09.818609   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
	I0925 11:34:09.857846   57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
	I0925 11:34:09.857875   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
	I0925 11:34:09.880799   57426 logs.go:123] Gathering logs for container status ...
	I0925 11:34:09.880828   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:34:09.950547   57426 logs.go:123] Gathering logs for kubelet ...
	I0925 11:34:09.950572   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 11:34:09.983084   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.983479   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.983617   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.983758   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940    1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
	W0925 11:34:09.984405   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.984547   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.985367   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.985576   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.985713   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.985898   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.986632   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.986786   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.986945   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987132   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987279   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987469   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987663   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988255   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.988398   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988533   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988685   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988822   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988958   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.989093   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.020550   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:34:10.022302   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:34:10.024541   57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
	I0925 11:34:10.024558   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
	I0925 11:34:10.053454   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:34:10.053477   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0925 11:34:10.053524   57426 out.go:239] X Problems detected in kubelet:
	W0925 11:34:10.053535   57426 out.go:239]   Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.053543   57426 out.go:239]   Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.053551   57426 out.go:239]   Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.053557   57426 out.go:239]   Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:34:10.053563   57426 out.go:239]   Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:34:10.053568   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:34:10.053573   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:34:20.061232   57426 system_pods.go:59] 8 kube-system pods found
	I0925 11:34:20.061260   57426 system_pods.go:61] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.061267   57426 system_pods.go:61] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.061271   57426 system_pods.go:61] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.061277   57426 system_pods.go:61] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.061284   57426 system_pods.go:61] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.061288   57426 system_pods.go:61] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.061295   57426 system_pods.go:61] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.061300   57426 system_pods.go:61] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.061307   57426 system_pods.go:74] duration metric: took 10.803468736s to wait for pod list to return data ...
	I0925 11:34:20.061314   57426 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:34:20.064090   57426 default_sa.go:45] found service account: "default"
	I0925 11:34:20.064114   57426 default_sa.go:55] duration metric: took 2.793638ms for default service account to be created ...
	I0925 11:34:20.064123   57426 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:34:20.068614   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:20.068644   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.068653   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.068674   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.068682   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.068690   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.068696   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.068707   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.068719   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.068739   57426 retry.go:31] will retry after 201.15744ms: missing components: kube-dns, kube-proxy
	I0925 11:34:20.275900   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:20.275943   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.275952   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.275960   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.275967   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.275974   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.275982   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.275992   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.276001   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.276021   57426 retry.go:31] will retry after 295.538203ms: missing components: kube-dns, kube-proxy
	I0925 11:34:20.579425   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:20.579469   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.579480   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.579489   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.579497   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.579506   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.579513   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.579522   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.579531   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.579553   57426 retry.go:31] will retry after 438.061345ms: missing components: kube-dns, kube-proxy
	I0925 11:34:21.024313   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:21.024351   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:21.024360   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:21.024365   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:21.024372   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:21.024381   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:21.024390   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:21.024401   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:21.024411   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:21.024428   57426 retry.go:31] will retry after 504.61622ms: missing components: kube-dns, kube-proxy
	I0925 11:34:21.536419   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:21.536449   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:21.536460   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:21.536466   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:21.536470   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:21.536476   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:21.536480   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:21.536486   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:21.536492   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:21.536506   57426 retry.go:31] will retry after 484.39135ms: missing components: kube-dns, kube-proxy
	I0925 11:34:22.027728   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:22.027766   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:22.027776   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:22.027783   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:22.027787   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:22.027796   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:22.027804   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:22.027814   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:22.027822   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:22.027838   57426 retry.go:31] will retry after 680.21989ms: missing components: kube-dns, kube-proxy
	I0925 11:34:22.714282   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:22.714315   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:22.714326   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:22.714335   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:22.714342   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:22.714349   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:22.714354   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:22.714365   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:22.714381   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:22.714399   57426 retry.go:31] will retry after 719.383007ms: missing components: kube-dns, kube-proxy
	I0925 11:34:23.438829   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:23.438855   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:23.438862   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:23.438867   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:23.438872   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:23.438877   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:23.438882   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:23.438891   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:23.438898   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:23.438912   57426 retry.go:31] will retry after 1.277927153s: missing components: kube-dns, kube-proxy
	I0925 11:34:24.724821   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:24.724855   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:24.724864   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:24.724871   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:24.724878   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:24.724887   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:24.724894   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:24.724904   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:24.724919   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:24.724942   57426 retry.go:31] will retry after 1.757108265s: missing components: kube-dns, kube-proxy
	I0925 11:34:26.488127   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:26.488156   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:26.488163   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:26.488182   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:26.488203   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:26.488213   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:26.488222   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:26.488232   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:26.488247   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:26.488266   57426 retry.go:31] will retry after 1.427718537s: missing components: kube-dns, kube-proxy
	I0925 11:34:27.921755   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:27.921783   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:27.921790   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:27.921795   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:27.921800   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:27.921805   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:27.921810   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:27.921815   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:27.921821   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:27.921835   57426 retry.go:31] will retry after 1.957734881s: missing components: kube-dns, kube-proxy
	I0925 11:34:29.885748   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:29.885776   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:29.885783   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:29.885789   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:29.885794   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:29.885799   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:29.885803   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:29.885810   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:29.885815   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:29.885830   57426 retry.go:31] will retry after 3.054467533s: missing components: kube-dns, kube-proxy
	I0925 11:34:32.946353   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:32.946383   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:32.946391   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:32.946396   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:32.946401   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:32.946406   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:32.946410   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:32.946416   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:32.946421   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:32.946434   57426 retry.go:31] will retry after 3.761041339s: missing components: kube-dns, kube-proxy
	I0925 11:34:36.713729   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:36.713754   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:36.713761   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:36.713767   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:36.713772   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:36.713777   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:36.713781   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:36.713788   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:36.713793   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:36.713807   57426 retry.go:31] will retry after 4.734467176s: missing components: kube-dns, kube-proxy
	I0925 11:34:41.454464   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:41.454492   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:41.454498   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:41.454503   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:41.454508   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:41.454513   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:41.454518   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:41.454524   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:41.454529   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:41.454542   57426 retry.go:31] will retry after 4.698913888s: missing components: kube-dns, kube-proxy
	I0925 11:34:46.159214   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:46.159255   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:46.159266   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:46.159275   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:46.159282   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:46.159292   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:46.159299   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:46.159314   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:46.159328   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:46.159350   57426 retry.go:31] will retry after 5.507304477s: missing components: kube-dns, kube-proxy
	I0925 11:34:51.672849   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:51.672877   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:51.672884   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:51.672889   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:51.672894   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:51.672899   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:51.672905   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:51.672914   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:51.672919   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:51.672933   57426 retry.go:31] will retry after 8.254229342s: missing components: kube-dns, kube-proxy
	I0925 11:34:59.936057   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:59.936086   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:59.936094   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:59.936099   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:59.936104   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:59.936109   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:59.936114   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:59.936119   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:59.936125   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:59.936139   57426 retry.go:31] will retry after 9.535060954s: missing components: kube-dns, kube-proxy
	I0925 11:35:09.479385   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:09.479413   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:09.479420   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:09.479428   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:09.479433   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:09.479441   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:09.479446   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:09.479452   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:09.479459   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:09.479471   57426 retry.go:31] will retry after 13.479799453s: missing components: kube-dns, kube-proxy
	I0925 11:35:22.964926   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:22.964955   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:22.964962   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:22.964967   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:22.964972   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:22.964977   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:22.964982   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:22.964988   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:22.964993   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:22.965006   57426 retry.go:31] will retry after 14.199608167s: missing components: kube-dns, kube-proxy
	I0925 11:35:37.171988   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:37.172022   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:37.172034   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:37.172041   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:37.172048   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:37.172055   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:37.172061   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:37.172072   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:37.172083   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:37.172101   57426 retry.go:31] will retry after 17.274040235s: missing components: kube-dns, kube-proxy
	I0925 11:35:54.452675   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:54.452702   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:54.452709   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:54.452714   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:54.452719   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:54.452727   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:54.452731   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:54.452738   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:54.452743   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:54.452756   57426 retry.go:31] will retry after 28.29436119s: missing components: kube-dns, kube-proxy
	I0925 11:36:22.755662   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:36:22.755700   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:36:22.755710   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:36:22.755718   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:36:22.755724   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:36:22.755732   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:36:22.755746   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:36:22.755761   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:36:22.755771   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:36:22.755791   57426 retry.go:31] will retry after 35.525659438s: missing components: kube-dns, kube-proxy
	I0925 11:36:58.289849   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:36:58.289887   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:36:58.289896   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:36:58.289901   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:36:58.289910   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:36:58.289919   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:36:58.289927   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:36:58.289939   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:36:58.289950   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:36:58.289971   57426 retry.go:31] will retry after 44.058995008s: missing components: kube-dns, kube-proxy
	I0925 11:37:42.356673   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:37:42.356698   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:37:42.356705   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:37:42.356710   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:37:42.356715   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:37:42.356721   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:37:42.356725   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:37:42.356731   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:37:42.356736   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:37:42.356752   57426 retry.go:31] will retry after 47.757072258s: missing components: kube-dns, kube-proxy
	I0925 11:38:30.124408   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:38:30.124436   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:38:30.124443   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:38:30.124449   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:38:30.124454   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:38:30.124459   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:38:30.124464   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:38:30.124470   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:38:30.124475   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:38:30.124490   57426 retry.go:31] will retry after 48.54868015s: missing components: kube-dns, kube-proxy
	I0925 11:39:18.680525   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:39:18.680555   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:39:18.680561   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:39:18.680567   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:39:18.680572   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:39:18.680578   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:39:18.680582   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:39:18.680589   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:39:18.680594   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:39:18.680607   57426 retry.go:31] will retry after 53.095866632s: missing components: kube-dns, kube-proxy
	I0925 11:40:11.783486   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:40:11.783513   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:40:11.783520   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:40:11.783527   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:40:11.783532   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:40:11.783537   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:40:11.783542   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:40:11.783548   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:40:11.783553   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:40:11.786119   57426 out.go:177] 
	W0925 11:40:11.787697   57426 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
	W0925 11:40:11.787711   57426 out.go:239] * 
	W0925 11:40:11.788461   57426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 11:40:11.790057   57426 out.go:177] 
	
	* 
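	[Editor's annotation, not test output: the "will retry after ...: missing components: kube-dns, kube-proxy" lines above grow from roughly 200ms toward minute-scale intervals until the 6m0s node wait expires and the run exits with GUEST_START. The following is a hypothetical sketch of that capped, jittered exponential-backoff pattern; it is an assumption about the shape of the retry loop, not minikube's actual retry.go source:]

```go
// Hypothetical sketch (assumption, not minikube's retry.go): models the
// capped, jittered exponential backoff suggested by the "will retry
// after ..." intervals in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil calls check until it succeeds or the overall deadline passes,
// sleeping a jittered, exponentially growing interval between attempts.
func retryUntil(deadline time.Duration, check func() error) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out after %s: %w", deadline, err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // up to +50% jitter
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay *= 2; delay > time.Minute { // cap the base interval
			delay = time.Minute
		}
	}
}

func main() {
	// The check never succeeds here, mirroring the failed wait for
	// kube-dns and kube-proxy; a short deadline keeps the demo quick.
	err := retryUntil(3*time.Second, func() error {
		return errors.New("missing components: kube-dns, kube-proxy")
	})
	fmt.Println(err)
}
```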
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 11:24:53 UTC, ends at Mon 2023-09-25 11:49:14 UTC. --
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572406518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572497492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572525871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572544812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618491365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618680379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618696521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618704838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.155674989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.155883992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.156004251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.156243152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.045907108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.046033975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.046090982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.046108215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1190]: time="2023-09-25T11:30:56.109068079Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.462862941Z" level=info msg="shim disconnected" id=5d3673792ccfc336b8935a34b5a443284dc8b677eebf5137a219cccc3c403f5d namespace=moby
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.462964770Z" level=warning msg="cleaning up after shim disconnected" id=5d3673792ccfc336b8935a34b5a443284dc8b677eebf5137a219cccc3c403f5d namespace=moby
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.462982909Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1190]: time="2023-09-25T11:30:56.463078511Z" level=info msg="ignoring event" container=5d3673792ccfc336b8935a34b5a443284dc8b677eebf5137a219cccc3c403f5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824501229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824684623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824701374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824713075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                        COMMAND                  CREATED          STATUS                      PORTS     NAMES
	0f9de8bda7fb   kubernetesui/dashboard       "/dashboard --insecu…"   18 minutes ago   Up 18 minutes                         k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-z674w_kubernetes-dashboard_5d234114-a13f-403f-98e0-7b5fbf830fdd_0
	5d3673792ccf   registry.k8s.io/echoserver   "nginx -g 'daemon of…"   18 minutes ago   Exited (1) 18 minutes ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-mxvxx_kubernetes-dashboard_da3f5657-7e9d-4ba7-b42a-d92a2b5fd683_0
	90dc66317fc1   6e38f40d628d                 "/storage-provisioner"   18 minutes ago   Up 18 minutes                         k8s_storage-provisioner_storage-provisioner_kube-system_c74c1aa8-7249-477e-8ef9-1bcaf418ad03_0
	b16fb26ba287   k8s.gcr.io/pause:3.1         "/pause"                 18 minutes ago   Up 18 minutes                         k8s_POD_storage-provisioner_kube-system_c74c1aa8-7249-477e-8ef9-1bcaf418ad03_0
	4eb82cb0fa23   k8s.gcr.io/pause:3.1         "/pause"                 18 minutes ago   Up 18 minutes                         k8s_POD_kubernetes-dashboard-84b68f675b-z674w_kubernetes-dashboard_5d234114-a13f-403f-98e0-7b5fbf830fdd_0
	802d2fbd8809   k8s.gcr.io/pause:3.1         "/pause"                 18 minutes ago   Up 18 minutes                         k8s_POD_dashboard-metrics-scraper-d6b4b5544-mxvxx_kubernetes-dashboard_da3f5657-7e9d-4ba7-b42a-d92a2b5fd683_0
	6a94e2e5690b   k8s.gcr.io/pause:3.1         "/pause"                 18 minutes ago   Up 18 minutes                         k8s_POD_metrics-server-74d5856cc6-wbskx_kube-system_5925c507-8225-4b9c-b89e-13346451d090_0
	c4e353aa787b   bf261d157914                 "/coredns -conf /etc…"   18 minutes ago   Up 18 minutes                         k8s_coredns_coredns-5644d7b6d9-qnqxm_kube-system_f5167272-c4e6-438f-ba45-f977df42bc3b_0
	2bccdb65c1cc   c21b0c7400f9                 "/usr/local/bin/kube…"   18 minutes ago   Up 18 minutes                         k8s_kube-proxy_kube-proxy-gsdzk_kube-system_d183e6c3-2cf1-46d4-a9ff-e03c97aa161c_0
	2088f3a7c0bc   k8s.gcr.io/pause:3.1         "/pause"                 18 minutes ago   Up 18 minutes                         k8s_POD_kube-proxy-gsdzk_kube-system_d183e6c3-2cf1-46d4-a9ff-e03c97aa161c_0
	75c3319baa09   k8s.gcr.io/pause:3.1         "/pause"                 18 minutes ago   Up 18 minutes                         k8s_POD_coredns-5644d7b6d9-qnqxm_kube-system_f5167272-c4e6-438f-ba45-f977df42bc3b_0
	eb63d31189ed   k8s.gcr.io/pause:3.1         "/pause"                 18 minutes ago   Created                               k8s_POD_coredns-5644d7b6d9-rn247_kube-system_f0e633d0-75fb-4406-928a-ec680c4052fa_0
	4b655f8475a9   b2756210eeab                 "etcd --advertise-cl…"   18 minutes ago   Up 18 minutes                         k8s_etcd_etcd-old-k8s-version-694015_kube-system_319810d3e321e4b27bff365f5661410b_0
	08dbfa6061b3   301ddc62b80b                 "kube-scheduler --au…"   18 minutes ago   Up 18 minutes                         k8s_kube-scheduler_kube-scheduler-old-k8s-version-694015_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	59225a8740b7   06a629a7e51c                 "kube-controller-man…"   18 minutes ago   Up 18 minutes                         k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-694015_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	34825b8222f1   b305571ca60a                 "kube-apiserver --ad…"   18 minutes ago   Up 18 minutes                         k8s_kube-apiserver_kube-apiserver-old-k8s-version-694015_kube-system_ea8f9e449dd1304167590b964553922a_0
	5b274efecb4d   k8s.gcr.io/pause:3.1         "/pause"                 18 minutes ago   Up 18 minutes                         k8s_POD_etcd-old-k8s-version-694015_kube-system_319810d3e321e4b27bff365f5661410b_0
	6e623a69a033   k8s.gcr.io/pause:3.1         "/pause"                 18 minutes ago   Up 18 minutes                         k8s_POD_kube-scheduler-old-k8s-version-694015_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	961cf08898d9   k8s.gcr.io/pause:3.1         "/pause"                 18 minutes ago   Up 18 minutes                         k8s_POD_kube-controller-manager-old-k8s-version-694015_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	713ec26ea888   k8s.gcr.io/pause:3.1         "/pause"                 18 minutes ago   Up 18 minutes                         k8s_POD_kube-apiserver-old-k8s-version-694015_kube-system_ea8f9e449dd1304167590b964553922a_0
	time="2023-09-25T11:49:14Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [c4e353aa787b] <==
	* .:53
	2023-09-25T11:30:47.501Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-09-25T11:30:47.501Z [INFO] CoreDNS-1.6.2
	2023-09-25T11:30:47.501Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-694015
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-694015
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=old-k8s-version-694015
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T11_30_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 11:30:26 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 11:49:10 +0000   Mon, 25 Sep 2023 11:30:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 11:49:10 +0000   Mon, 25 Sep 2023 11:30:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 11:49:10 +0000   Mon, 25 Sep 2023 11:30:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 25 Sep 2023 11:49:10 +0000   Mon, 25 Sep 2023 11:48:50 +0000   KubeletNotReady              PLEG is not healthy: pleg was last seen active 3m22.591287503s ago; threshold is 3m0s
	Addresses:
	  InternalIP:  192.168.50.17
	  Hostname:    old-k8s-version-694015
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 1bd5d978d1e543b686646b2c32f30862
	 System UUID:                1bd5d978-d1e5-43b6-8664-6b2c32f30862
	 Boot ID:                    5678d5b5-5910-4d2d-a245-2b8fc64bd779
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-qnqxm                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                etcd-old-k8s-version-694015                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                kube-apiserver-old-k8s-version-694015             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                kube-controller-manager-old-k8s-version-694015    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                kube-proxy-gsdzk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-scheduler-old-k8s-version-694015             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                metrics-server-74d5856cc6-wbskx                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-mxvxx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-z674w             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeHasSufficientPID
	  Normal  Starting                 18m                kube-proxy, old-k8s-version-694015  Starting kube-proxy.
	  Normal  NodeReady                3m26s              kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeReady
	  Normal  NodeNotReady             25s (x2 over 15m)  kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeNotReady
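
	The Ready=False condition and the NodeNotReady events carry the same PLEG message, and that is the symptom this run ultimately fails on: the kubelet's pod lifecycle event generator has not completed a relist within the 3m0s threshold, so the node is tainted node.kubernetes.io/not-ready:NoSchedule and stops accepting new pods. A sketch for pulling just that condition, assuming the test profile's kubeconfig context:

	kubectl --context old-k8s-version-694015 get node old-k8s-version-694015 \
	  -o=jsonpath='{.status.conditions[?(@.type=="Ready")].message}'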
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076891] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.528148] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.807712] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.166866] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.627379] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep25 11:25] systemd-fstab-generator[508]: Ignoring "noauto" for root device
	[  +0.112649] systemd-fstab-generator[519]: Ignoring "noauto" for root device
	[  +1.250517] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.395221] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[  +0.132329] systemd-fstab-generator[928]: Ignoring "noauto" for root device
	[  +0.148539] systemd-fstab-generator[941]: Ignoring "noauto" for root device
	[  +6.146658] systemd-fstab-generator[1181]: Ignoring "noauto" for root device
	[  +1.531877] kauditd_printk_skb: 67 callbacks suppressed
	[ +13.077793] systemd-fstab-generator[1658]: Ignoring "noauto" for root device
	[  +0.487565] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.199945] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.809912] kauditd_printk_skb: 5 callbacks suppressed
	[Sep25 11:26] hrtimer: interrupt took 6685373 ns
	[Sep25 11:30] systemd-fstab-generator[6846]: Ignoring "noauto" for root device
	[Sep25 11:31] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [4b655f8475a9] <==
	* 2023-09-25 11:30:21.348787 I | raft: a74ab9f845be4a88 became follower at term 1
	2023-09-25 11:30:21.595167 W | auth: simple token is not cryptographically signed
	2023-09-25 11:30:21.604807 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-09-25 11:30:21.607417 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-25 11:30:21.608224 I | etcdserver: a74ab9f845be4a88 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-25 11:30:21.609008 I | etcdserver/membership: added member a74ab9f845be4a88 [https://192.168.50.17:2380] to cluster e7a7808069af5882
	2023-09-25 11:30:21.609764 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-25 11:30:21.610013 I | embed: listening for metrics on http://192.168.50.17:2381
	2023-09-25 11:30:22.316022 I | raft: a74ab9f845be4a88 is starting a new election at term 1
	2023-09-25 11:30:22.316075 I | raft: a74ab9f845be4a88 became candidate at term 2
	2023-09-25 11:30:22.316089 I | raft: a74ab9f845be4a88 received MsgVoteResp from a74ab9f845be4a88 at term 2
	2023-09-25 11:30:22.316099 I | raft: a74ab9f845be4a88 became leader at term 2
	2023-09-25 11:30:22.316104 I | raft: raft.node: a74ab9f845be4a88 elected leader a74ab9f845be4a88 at term 2
	2023-09-25 11:30:22.316356 I | etcdserver: setting up the initial cluster version to 3.3
	2023-09-25 11:30:22.318109 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-25 11:30:22.318162 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-25 11:30:22.318191 I | etcdserver: published {Name:old-k8s-version-694015 ClientURLs:[https://192.168.50.17:2379]} to cluster e7a7808069af5882
	2023-09-25 11:30:22.318197 I | embed: ready to serve client requests
	2023-09-25 11:30:22.318821 I | embed: ready to serve client requests
	2023-09-25 11:30:22.319844 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-25 11:30:22.319991 I | embed: serving client requests on 192.168.50.17:2379
	2023-09-25 11:40:22.349070 I | mvcc: store.index: compact 705
	2023-09-25 11:40:22.356379 I | mvcc: finished scheduled compaction at 705 (took 6.531112ms)
	2023-09-25 11:45:22.355942 I | mvcc: store.index: compact 946
	2023-09-25 11:45:22.358397 I | mvcc: finished scheduled compaction at 946 (took 1.629731ms)
	
	* 
	* ==> kernel <==
	*  11:49:15 up 24 min,  0 users,  load average: 0.08, 0.20, 0.23
	Linux old-k8s-version-694015 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [34825b8222f1] <==
	* I0925 11:41:26.972189       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:41:26.972655       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:41:26.973048       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:41:26.973129       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:43:26.973757       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:43:26.973871       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:43:26.974099       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:43:26.974136       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:45:26.973699       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:45:26.973970       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:45:26.974212       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:45:26.974466       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:46:26.975055       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:46:26.975165       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:46:26.975224       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:46:26.975233       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:48:26.975907       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:48:26.976230       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:48:26.976641       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:48:26.976828       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
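
	The aggregator retries v1beta1.metrics.k8s.io roughly every two minutes and gets a 503 each time because nothing healthy backs the APIService: these tests point the metrics-server addon at a fake registry (the --registries=MetricsServer=fake.domain pattern visible in the Audit table below), so its pod never becomes ready. A sketch for reading the aggregated API's own status, assuming the same context:

	kubectl --context old-k8s-version-694015 get apiservice v1beta1.metrics.k8s.io \
	  -o=jsonpath='{.status.conditions[0].message}'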
	
	* 
	* ==> kube-controller-manager [59225a8740b7] <==
	* E0925 11:43:22.210123       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:43:33.942818       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:43:52.462196       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:44:05.945193       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:44:22.714482       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:44:37.947792       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:44:52.966721       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:45:09.949700       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:45:23.219430       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:45:41.951828       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0925 11:45:50.432124       1 node_lifecycle_controller.go:1085] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0925 11:45:53.471749       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:46:13.953840       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:46:23.724212       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:46:45.956748       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:46:53.976331       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:47:17.958827       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:47:24.228167       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:47:49.960817       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:47:54.479994       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:48:21.963142       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:48:24.732358       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:48:53.965927       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:48:54.984758       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0925 11:48:55.445015       1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	
	* 
	* ==> kube-proxy [2bccdb65c1cc] <==
	* W0925 11:30:47.128400       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0925 11:30:47.177538       1 node.go:135] Successfully retrieved node IP: 192.168.50.17
	I0925 11:30:47.177648       1 server_others.go:149] Using iptables Proxier.
	I0925 11:30:47.271820       1 server.go:529] Version: v1.16.0
	I0925 11:30:47.304914       1 config.go:313] Starting service config controller
	I0925 11:30:47.305050       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0925 11:30:47.305152       1 config.go:131] Starting endpoints config controller
	I0925 11:30:47.305167       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0925 11:30:47.424722       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0925 11:30:47.424968       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [08dbfa6061b3] <==
	* W0925 11:30:25.965118       1 authentication.go:79] Authentication is disabled
	I0925 11:30:25.965128       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0925 11:30:25.969940       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0925 11:30:26.032268       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 11:30:26.032513       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 11:30:26.034880       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 11:30:26.035163       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 11:30:26.035326       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 11:30:26.035758       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 11:30:26.041977       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 11:30:26.042199       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 11:30:26.042371       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 11:30:26.043936       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 11:30:26.044107       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 11:30:27.035540       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 11:30:27.039764       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 11:30:27.039841       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 11:30:27.044797       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 11:30:27.047742       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 11:30:27.047784       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 11:30:27.049796       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 11:30:27.051510       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 11:30:27.054657       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 11:30:27.058480       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 11:30:27.061633       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
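
	The burst of forbidden errors is the usual cold-start race rather than a misconfiguration: the scheduler's informers begin listing before the RBAC bootstrap finishes, and the errors stop once the default system:kube-scheduler binding exists (note they do not recur after 11:30:27). A quick check, assuming the same context:

	kubectl --context old-k8s-version-694015 get clusterrolebinding system:kube-scheduler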
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 11:24:53 UTC, ends at Mon 2023-09-25 11:49:15 UTC. --
	Sep 25 11:45:23 old-k8s-version-694015 kubelet[6852]: I0925 11:45:23.129700    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m36.907156855s ago; threshold is 3m0s
	Sep 25 11:45:28 old-k8s-version-694015 kubelet[6852]: I0925 11:45:28.130633    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m41.908025449s ago; threshold is 3m0s
	Sep 25 11:45:33 old-k8s-version-694015 kubelet[6852]: I0925 11:45:33.130937    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m46.908381979s ago; threshold is 3m0s
	Sep 25 11:45:38 old-k8s-version-694015 kubelet[6852]: I0925 11:45:38.131723    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m51.9091665s ago; threshold is 3m0s
	Sep 25 11:45:43 old-k8s-version-694015 kubelet[6852]: I0925 11:45:43.132457    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m56.909916841s ago; threshold is 3m0s
	Sep 25 11:45:45 old-k8s-version-694015 kubelet[6852]: E0925 11:45:45.811396    6852 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-5644d7b6d9-rn247": operation timeout: context deadline exceeded
	Sep 25 11:45:45 old-k8s-version-694015 kubelet[6852]: E0925 11:45:45.811978    6852 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-5644d7b6d9-rn247_kube-system(f0e633d0-75fb-4406-928a-ec680c4052fa)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-5644d7b6d9-rn247": operation timeout: context deadline exceeded
	Sep 25 11:45:45 old-k8s-version-694015 kubelet[6852]: E0925 11:45:45.812044    6852 kuberuntime_manager.go:710] createPodSandbox for pod "coredns-5644d7b6d9-rn247_kube-system(f0e633d0-75fb-4406-928a-ec680c4052fa)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-5644d7b6d9-rn247": operation timeout: context deadline exceeded
	Sep 25 11:45:45 old-k8s-version-694015 kubelet[6852]: E0925 11:45:45.812167    6852 pod_workers.go:191] Error syncing pod f0e633d0-75fb-4406-928a-ec680c4052fa ("coredns-5644d7b6d9-rn247_kube-system(f0e633d0-75fb-4406-928a-ec680c4052fa)"), skipping: failed to "CreatePodSandbox" for "coredns-5644d7b6d9-rn247_kube-system(f0e633d0-75fb-4406-928a-ec680c4052fa)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-5644d7b6d9-rn247_kube-system(f0e633d0-75fb-4406-928a-ec680c4052fa)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"coredns-5644d7b6d9-rn247\": operation timeout: context deadline exceeded"
	Sep 25 11:45:46 old-k8s-version-694015 kubelet[6852]: E0925 11:45:46.935752    6852 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "eb63d31189ed04e27f47b3b74e416db20460f24feabe2ce37b6e1513ffdcc8c9" for pod "coredns-5644d7b6d9-rn247_kube-system(f0e633d0-75fb-4406-928a-ec680c4052fa)" error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
	Sep 25 11:45:48 old-k8s-version-694015 kubelet[6852]: W0925 11:45:48.133370    6852 pod_container_deletor.go:75] Container "2088f3a7c0bcfd57ae26cb4de39dd38f3d4dc77f92f2fd093d22713b0ec98374" not found in pod's containers
	Sep 25 11:46:18 old-k8s-version-694015 kubelet[6852]: E0925 11:46:18.790781    6852 remote_runtime.go:128] StopPodSandbox "eb63d31189ed04e27f47b3b74e416db20460f24feabe2ce37b6e1513ffdcc8c9" from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
	Sep 25 11:46:18 old-k8s-version-694015 kubelet[6852]: E0925 11:46:18.791702    6852 kuberuntime_gc.go:170] Failed to stop sandbox "eb63d31189ed04e27f47b3b74e416db20460f24feabe2ce37b6e1513ffdcc8c9" before removing: rpc error: code = DeadlineExceeded desc = context deadline exceeded
	Sep 25 11:48:48 old-k8s-version-694015 kubelet[6852]: I0925 11:48:48.746747    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m0.799354391s ago; threshold is 3m0s
	Sep 25 11:48:48 old-k8s-version-694015 kubelet[6852]: I0925 11:48:48.848086    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m0.901286637s ago; threshold is 3m0s
	Sep 25 11:48:49 old-k8s-version-694015 kubelet[6852]: I0925 11:48:49.048911    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m1.102109063s ago; threshold is 3m0s
	Sep 25 11:48:49 old-k8s-version-694015 kubelet[6852]: I0925 11:48:49.449458    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m1.502658348s ago; threshold is 3m0s
	Sep 25 11:48:50 old-k8s-version-694015 kubelet[6852]: I0925 11:48:50.249974    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m2.303173639s ago; threshold is 3m0s
	Sep 25 11:48:50 old-k8s-version-694015 kubelet[6852]: I0925 11:48:50.451986    6852 setters.go:539] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2023-09-25 11:48:50.451941356 +0000 UTC m=+1112.364536466 LastTransitionTime:2023-09-25 11:48:50.451941356 +0000 UTC m=+1112.364536466 Reason:KubeletNotReady Message:PLEG is not healthy: pleg was last seen active 3m2.505173153s ago; threshold is 3m0s}
	Sep 25 11:48:51 old-k8s-version-694015 kubelet[6852]: I0925 11:48:51.850278    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m3.903476415s ago; threshold is 3m0s
	Sep 25 11:48:55 old-k8s-version-694015 kubelet[6852]: I0925 11:48:55.050865    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m7.103953631s ago; threshold is 3m0s
	Sep 25 11:49:00 old-k8s-version-694015 kubelet[6852]: I0925 11:49:00.051349    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m12.104550876s ago; threshold is 3m0s
	Sep 25 11:49:05 old-k8s-version-694015 kubelet[6852]: I0925 11:49:05.051709    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m17.104911188s ago; threshold is 3m0s
	Sep 25 11:49:10 old-k8s-version-694015 kubelet[6852]: I0925 11:49:10.052384    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m22.105581404s ago; threshold is 3m0s
	Sep 25 11:49:15 old-k8s-version-694015 kubelet[6852]: I0925 11:49:15.052798    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m27.105997962s ago; threshold is 3m0s
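
	Read together, these journal entries show two distinct PLEG stalls: one that had grown to roughly 15m before clearing between 11:45:43 and 11:45:48 (matching the NodeReady event 3m26s before this capture), and a second that began around 11:45:48 and is still growing when the log ends. A sketch for tailing just these entries while the profile is still up:

	minikube -p old-k8s-version-694015 ssh \
	  "sudo journalctl -u kubelet --no-pager | grep 'PLEG is not healthy' | tail -n 5"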
	
	* 
	* ==> kubernetes-dashboard [0f9de8bda7fb] <==
	* 2023/09/25 11:37:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:37:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:38:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:38:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:39:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:39:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:40:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:40:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:41:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:41:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:42:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:42:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:43:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:43:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:44:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:44:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:45:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:45:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:46:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:46:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:47:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:47:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:48:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:48:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:49:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
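
	The dashboard retries its metrics source every 30 seconds for the entire run and never succeeds; the dashboard-metrics-scraper pod it depends on appears unserviceable throughout (its container, visibly substituted with the echoserver image in the container list above, exited with status 1 right after start). A sketch for checking whether any endpoints back the service, assuming the same context:

	kubectl --context old-k8s-version-694015 -n kubernetes-dashboard \
	  get endpoints dashboard-metrics-scraper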
	
	* 
	* ==> storage-provisioner [90dc66317fc1] <==
	* I0925 11:30:51.322039       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 11:30:51.347548       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 11:30:51.348062       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 11:30:51.364910       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 11:30:51.365497       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-694015_c2b9f123-e72a-43cd-8aaf-531be42e41fa!
	I0925 11:30:51.368701       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82068dcb-41ed-493c-a127-6ea04652eda5", APIVersion:"v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-694015_c2b9f123-e72a-43cd-8aaf-531be42e41fa became leader
	I0925 11:30:51.466721       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-694015_c2b9f123-e72a-43cd-8aaf-531be42e41fa!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694015 -n old-k8s-version-694015
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-694015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-5644d7b6d9-qnqxm metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w
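
The sweep above filters server-side via a field selector, and every pod it reports is then NotFound in the describe that follows, i.e. all five were deleted between the two calls. A standalone form of the same query, assuming the profile's kubeconfig context is still valid:

	kubectl --context old-k8s-version-694015 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'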
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-694015 describe pod coredns-5644d7b6d9-qnqxm metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-694015 describe pod coredns-5644d7b6d9-qnqxm metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w: exit status 1 (64.981987ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-qnqxm" not found
	Error from server (NotFound): pods "metrics-server-74d5856cc6-wbskx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-d6b4b5544-mxvxx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-84b68f675b-z674w" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-694015 describe pod coredns-5644d7b6d9-qnqxm metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.38s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (133.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-z674w" [5d234114-a13f-403f-98e0-7b5fbf830fdd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0925 11:49:31.880024   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/no-preload-863905/client.crt: no such file or directory
E0925 11:49:38.698340   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/default-k8s-diff-port-319133/client.crt: no such file or directory
E0925 11:50:25.174993   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 11:50:27.125657   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:50:30.349971   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:50:36.423104   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:50:54.923140   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/no-preload-863905/client.crt: no such file or directory
E0925 11:50:57.911977   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:51:01.744519   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/default-k8s-diff-port-319133/client.crt: no such file or directory
E0925 11:51:19.413870   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694015 -n old-k8s-version-694015
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-25 11:51:27.440834599 +0000 UTC m=+4672.608190447
start_stop_delete_test.go:287: (dbg) Run:  kubectl --context old-k8s-version-694015 describe po kubernetes-dashboard-84b68f675b-z674w -n kubernetes-dashboard
start_stop_delete_test.go:287: (dbg) Non-zero exit: kubectl --context old-k8s-version-694015 describe po kubernetes-dashboard-84b68f675b-z674w -n kubernetes-dashboard: context deadline exceeded (1.719µs)
start_stop_delete_test.go:287: kubectl --context old-k8s-version-694015 describe po kubernetes-dashboard-84b68f675b-z674w -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:287: (dbg) Run:  kubectl --context old-k8s-version-694015 logs kubernetes-dashboard-84b68f675b-z674w -n kubernetes-dashboard
start_stop_delete_test.go:287: (dbg) Non-zero exit: kubectl --context old-k8s-version-694015 logs kubernetes-dashboard-84b68f675b-z674w -n kubernetes-dashboard: context deadline exceeded (194ns)
start_stop_delete_test.go:287: kubectl --context old-k8s-version-694015 logs kubernetes-dashboard-84b68f675b-z674w -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-694015 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-694015 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (268ns)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-694015 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694015 -n old-k8s-version-694015
E0925 11:51:27.536507   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-694015 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-694015 logs -n 25: (1.030380308s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p newest-cni-372603                                   | newest-cni-372603            | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-372603                                   | newest-cni-372603            | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-372603                                   | newest-cni-372603            | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	| delete  | -p newest-cni-372603                                   | newest-cni-372603            | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-785493 | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
	|         | disable-driver-mounts-785493                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-094323            | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-094323                 | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p no-preload-863905 sudo                              | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-863905                                   | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-863905                                   | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-863905                                   | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	| delete  | -p no-preload-863905                                   | no-preload-863905            | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	| ssh     | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
	|         | default-k8s-diff-port-319133                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-094323 sudo                             | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	| delete  | -p embed-certs-094323                                  | embed-certs-094323           | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
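Note the `addons enable metrics-server --registries=MetricsServer=fake.domain` rows above: the registry override is prefixed onto the addon image, yielding fake.domain/registry.k8s.io/echoserver:1.4, a deliberately unpullable reference. That is consistent with the pod_ready polls further down reporting "Ready":"False" for the metrics-server pods throughout the wait window. A sketch of the prefixing, with an illustrative helper name (the real mapping lives in minikube's addon code):

package main

import "fmt"

// withRegistry shows how a --registries override is prefixed onto an
// addon's image reference.
func withRegistry(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	// Prints fake.domain/registry.k8s.io/echoserver:1.4 -- unpullable by
	// design, so the metrics-server pod stays unready while the test polls.
	fmt.Println(withRegistry("fake.domain", "registry.k8s.io/echoserver:1.4"))
}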
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 11:28:19
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 11:28:19.035134   59899 out.go:296] Setting OutFile to fd 1 ...
	I0925 11:28:19.035380   59899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:28:19.035388   59899 out.go:309] Setting ErrFile to fd 2...
	I0925 11:28:19.035392   59899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:28:19.035594   59899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
	I0925 11:28:19.036084   59899 out.go:303] Setting JSON to false
	I0925 11:28:19.037024   59899 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4250,"bootTime":1695637049,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 11:28:19.037076   59899 start.go:138] virtualization: kvm guest
	I0925 11:28:19.039385   59899 out.go:177] * [embed-certs-094323] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 11:28:19.041106   59899 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 11:28:19.041220   59899 notify.go:220] Checking for updates...
	I0925 11:28:19.042531   59899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:28:19.043924   59899 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:28:19.045264   59899 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	I0925 11:28:19.046665   59899 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 11:28:19.047943   59899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 11:28:19.049713   59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 11:28:19.050284   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:28:19.050336   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:28:19.066768   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0925 11:28:19.067166   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:28:19.067840   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:28:19.067866   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:28:19.068328   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:28:19.068548   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:19.069227   59899 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 11:28:19.070747   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:28:19.070796   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:28:19.084889   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38725
	I0925 11:28:19.085259   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:28:19.085647   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:28:19.085666   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:28:19.085966   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:28:19.086156   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:19.120695   59899 out.go:177] * Using the kvm2 driver based on existing profile
	I0925 11:28:19.122195   59899 start.go:298] selected driver: kvm2
	I0925 11:28:19.122213   59899 start.go:902] validating driver "kvm2" against &{Name:embed-certs-094323 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 11:28:19.122331   59899 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 11:28:19.122990   59899 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:28:19.123070   59899 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17297-6032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0925 11:28:19.137559   59899 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0925 11:28:19.137967   59899 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 11:28:19.138031   59899 cni.go:84] Creating CNI manager for ""
	I0925 11:28:19.138049   59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:28:19.138061   59899 start_flags.go:321] config:
	{Name:embed-certs-094323 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 11:28:19.138243   59899 iso.go:125] acquiring lock: {Name:mkb9e2f6e1d5a2b50ee182236ae1b19ef3677829 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:28:19.139914   59899 out.go:177] * Starting control plane node embed-certs-094323 in cluster embed-certs-094323
	I0925 11:28:19.141213   59899 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 11:28:19.141251   59899 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0925 11:28:19.141267   59899 cache.go:57] Caching tarball of preloaded images
	I0925 11:28:19.141342   59899 preload.go:174] Found /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0925 11:28:19.141351   59899 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 11:28:19.141434   59899 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/config.json ...
	I0925 11:28:19.141593   59899 start.go:365] acquiring machines lock for embed-certs-094323: {Name:mk02fb3d97d6ed60b07ca18d96424c593d1bb8d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 11:28:19.141630   59899 start.go:369] acquired machines lock for "embed-certs-094323" in 22.488µs
	I0925 11:28:19.141643   59899 start.go:96] Skipping create...Using existing machine configuration
	I0925 11:28:19.141651   59899 fix.go:54] fixHost starting: 
	I0925 11:28:19.141918   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:28:19.141948   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:28:19.155211   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0925 11:28:19.155620   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:28:19.156032   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:28:19.156055   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:28:19.156384   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:28:19.156590   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:19.156767   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:28:19.158188   59899 fix.go:102] recreateIfNeeded on embed-certs-094323: state=Stopped err=<nil>
	I0925 11:28:19.158223   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	W0925 11:28:19.158395   59899 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 11:28:19.160159   59899 out.go:177] * Restarting existing kvm2 VM for "embed-certs-094323" ...
	I0925 11:28:15.403806   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:17.404448   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:19.405067   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:15.674829   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:18.175095   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:20.492932   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:22.991315   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:19.161340   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Start
	I0925 11:28:19.161501   59899 main.go:141] libmachine: (embed-certs-094323) Ensuring networks are active...
	I0925 11:28:19.162257   59899 main.go:141] libmachine: (embed-certs-094323) Ensuring network default is active
	I0925 11:28:19.162588   59899 main.go:141] libmachine: (embed-certs-094323) Ensuring network mk-embed-certs-094323 is active
	I0925 11:28:19.163048   59899 main.go:141] libmachine: (embed-certs-094323) Getting domain xml...
	I0925 11:28:19.163763   59899 main.go:141] libmachine: (embed-certs-094323) Creating domain...
	I0925 11:28:20.442361   59899 main.go:141] libmachine: (embed-certs-094323) Waiting to get IP...
	I0925 11:28:20.443271   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:20.443734   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:20.443823   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:20.443734   59935 retry.go:31] will retry after 267.692283ms: waiting for machine to come up
	I0925 11:28:20.713388   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:20.713952   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:20.713983   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:20.713901   59935 retry.go:31] will retry after 277.980932ms: waiting for machine to come up
	I0925 11:28:20.993556   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:20.994198   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:20.994234   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:20.994172   59935 retry.go:31] will retry after 459.010271ms: waiting for machine to come up
	I0925 11:28:21.454879   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:21.455430   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:21.455461   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:21.455383   59935 retry.go:31] will retry after 366.809435ms: waiting for machine to come up
	I0925 11:28:21.824207   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:21.824773   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:21.824806   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:21.824720   59935 retry.go:31] will retry after 488.071541ms: waiting for machine to come up
	I0925 11:28:22.314305   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:22.314790   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:22.314818   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:22.314762   59935 retry.go:31] will retry after 945.003407ms: waiting for machine to come up
	I0925 11:28:23.261899   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:23.262367   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:23.262409   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:23.262317   59935 retry.go:31] will retry after 1.092936458s: waiting for machine to come up
	I0925 11:28:21.407022   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:23.905338   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:20.674171   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:22.674573   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:25.174611   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:24.991430   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:27.491751   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:24.357394   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:24.358014   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:24.358072   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:24.357975   59935 retry.go:31] will retry after 1.364274695s: waiting for machine to come up
	I0925 11:28:25.723341   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:25.723819   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:25.723848   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:25.723762   59935 retry.go:31] will retry after 1.588423993s: waiting for machine to come up
	I0925 11:28:27.313769   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:27.314265   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:27.314299   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:27.314211   59935 retry.go:31] will retry after 1.537433598s: waiting for machine to come up
	I0925 11:28:28.853890   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:28.854449   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:28.854472   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:28.854378   59935 retry.go:31] will retry after 2.010519573s: waiting for machine to come up
	I0925 11:28:26.405198   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:28.409892   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:27.673983   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:29.675459   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:29.492466   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:31.493901   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:30.867498   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:30.868057   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:30.868084   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:30.868021   59935 retry.go:31] will retry after 2.230830763s: waiting for machine to come up
	I0925 11:28:33.100983   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:33.101572   59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
	I0925 11:28:33.101612   59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:33.101515   59935 retry.go:31] will retry after 4.360204715s: waiting for machine to come up
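The retry.go:31 lines above show libmachine polling libvirt for the VM's DHCP lease with growing, jittered delays until the machine reports an IP. A minimal sketch of that wait-with-backoff pattern; the timings and helper names are illustrative, not minikube's actual retry implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn with roughly doubling, jittered delays until it
// succeeds or the deadline passes.
func waitFor(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		if err := fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		// Jitter the sleep so concurrent waiters do not retry in lockstep.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
}

func main() {
	n := 0
	err := waitFor(func() error {
		// Stand-in for "does the domain have an IP yet?".
		if n++; n < 4 {
			return errors.New("no IP yet")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}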
	I0925 11:28:30.903969   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:32.905907   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:32.173159   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:34.672934   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:33.990422   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:35.990706   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:37.992428   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:37.463184   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.463720   59899 main.go:141] libmachine: (embed-certs-094323) Found IP for machine: 192.168.39.111
	I0925 11:28:37.463748   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has current primary IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.463757   59899 main.go:141] libmachine: (embed-certs-094323) Reserving static IP address...
	I0925 11:28:37.464174   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "embed-certs-094323", mac: "52:54:00:07:77:47", ip: "192.168.39.111"} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.464215   59899 main.go:141] libmachine: (embed-certs-094323) DBG | skip adding static IP to network mk-embed-certs-094323 - found existing host DHCP lease matching {name: "embed-certs-094323", mac: "52:54:00:07:77:47", ip: "192.168.39.111"}
	I0925 11:28:37.464230   59899 main.go:141] libmachine: (embed-certs-094323) Reserved static IP address: 192.168.39.111
	I0925 11:28:37.464248   59899 main.go:141] libmachine: (embed-certs-094323) Waiting for SSH to be available...
	I0925 11:28:37.464264   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Getting to WaitForSSH function...
	I0925 11:28:37.466402   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.466816   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.466843   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.467015   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Using SSH client type: external
	I0925 11:28:37.467053   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Using SSH private key: /home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa (-rw-------)
	I0925 11:28:37.467087   59899 main.go:141] libmachine: (embed-certs-094323) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0925 11:28:37.467100   59899 main.go:141] libmachine: (embed-certs-094323) DBG | About to run SSH command:
	I0925 11:28:37.467136   59899 main.go:141] libmachine: (embed-certs-094323) DBG | exit 0
	I0925 11:28:37.556399   59899 main.go:141] libmachine: (embed-certs-094323) DBG | SSH cmd err, output: <nil>: 
	I0925 11:28:37.556778   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetConfigRaw
	I0925 11:28:37.557414   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
	I0925 11:28:37.560030   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.560395   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.560428   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.560640   59899 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/config.json ...
	I0925 11:28:37.560845   59899 machine.go:88] provisioning docker machine ...
	I0925 11:28:37.560864   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:37.561073   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetMachineName
	I0925 11:28:37.561221   59899 buildroot.go:166] provisioning hostname "embed-certs-094323"
	I0925 11:28:37.561235   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetMachineName
	I0925 11:28:37.561420   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:37.563597   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.563895   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.563925   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.564030   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:37.564225   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.564405   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.564531   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:37.564705   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:37.565158   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:37.565180   59899 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-094323 && echo "embed-certs-094323" | sudo tee /etc/hostname
	I0925 11:28:37.695364   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-094323
	
	I0925 11:28:37.695398   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:37.698664   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.699091   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.699124   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.699344   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:37.699550   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.699717   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.699901   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:37.700108   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:37.700483   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:37.700503   59899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-094323' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-094323/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-094323' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 11:28:37.824658   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 11:28:37.824711   59899 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17297-6032/.minikube CaCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17297-6032/.minikube}
	I0925 11:28:37.824734   59899 buildroot.go:174] setting up certificates
	I0925 11:28:37.824745   59899 provision.go:83] configureAuth start
	I0925 11:28:37.824759   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetMachineName
	I0925 11:28:37.825074   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
	I0925 11:28:37.827695   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.828087   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.828131   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.828262   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:37.830526   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.830866   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.830897   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.830986   59899 provision.go:138] copyHostCerts
	I0925 11:28:37.831038   59899 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem, removing ...
	I0925 11:28:37.831050   59899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem
	I0925 11:28:37.831116   59899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem (1078 bytes)
	I0925 11:28:37.831199   59899 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem, removing ...
	I0925 11:28:37.831208   59899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem
	I0925 11:28:37.831231   59899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem (1123 bytes)
	I0925 11:28:37.831315   59899 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem, removing ...
	I0925 11:28:37.831322   59899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem
	I0925 11:28:37.831343   59899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem (1679 bytes)
	I0925 11:28:37.831388   59899 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem org=jenkins.embed-certs-094323 san=[192.168.39.111 192.168.39.111 localhost 127.0.0.1 minikube embed-certs-094323]
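The provision.go line above generates a TLS server certificate whose SANs cover the VM IP, localhost, and the machine name, signed against the CA under .minikube/certs. A minimal sketch of issuing such a SAN certificate with crypto/x509; it self-signs for brevity, which differs from minikube's CA-signed flow:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-094323"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the san=[...] list logged above.
		DNSNames:    []string{"localhost", "minikube", "embed-certs-094323"},
		IPAddresses: []net.IP{net.ParseIP("192.168.39.111"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed (template == parent); minikube signs with ca.pem/ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}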
	I0925 11:28:37.908612   59899 provision.go:172] copyRemoteCerts
	I0925 11:28:37.908700   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 11:28:37.908735   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:37.911729   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.912109   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:37.912140   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:37.912334   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:37.912534   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:37.912716   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:37.912845   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:28:37.998547   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 11:28:38.026509   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0925 11:28:38.050201   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 11:28:38.074649   59899 provision.go:86] duration metric: configureAuth took 249.890915ms
	I0925 11:28:38.074676   59899 buildroot.go:189] setting minikube options for container-runtime
	I0925 11:28:38.074944   59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 11:28:38.074975   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:38.075242   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:38.078170   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.078528   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:38.078567   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.078795   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:38.078989   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.079174   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.079356   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:38.079539   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:38.079964   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:38.079984   59899 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 11:28:38.198741   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 11:28:38.198765   59899 buildroot.go:70] root file system type: tmpfs
	I0925 11:28:38.198890   59899 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 11:28:38.198915   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:38.201807   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.202182   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:38.202213   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.202351   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:38.202547   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.202711   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.202847   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:38.202992   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:38.203346   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:38.203422   59899 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 11:28:38.330031   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 11:28:38.330061   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:38.333195   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.333537   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:38.333568   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:38.333754   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:38.333924   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.334109   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:38.334259   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:38.334428   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:38.334869   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:38.334898   59899 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 11:28:35.403941   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:37.405325   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:36.673537   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:38.675023   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:39.250696   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 11:28:39.250732   59899 machine.go:91] provisioned docker machine in 1.689868908s
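The provisioning step above relies on `diff`'s exit status for idempotency: `diff -u old new` exits 0 when the files match, so the `|| { mv ...; daemon-reload; restart; }` branch only runs when the unit actually changed (or, as here, did not exist yet). A minimal local sketch of the same pattern follows; `replaceUnitIfChanged` is a hypothetical helper, not minikube's actual provisioner code.

package main

import (
	"fmt"
	"os/exec"
)

// replaceUnitIfChanged mirrors the shell pattern from the log: diff exits
// non-zero when the installed unit differs from the staged .new copy (or is
// missing), and only then is the unit swapped in and the service restarted.
// Illustrative sketch only.
func replaceUnitIfChanged(unit, service string) error {
	script := fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s && sudo systemctl daemon-reload && sudo systemctl restart %[2]s; }",
		unit, service)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := replaceUnitIfChanged("/lib/systemd/system/docker.service", "docker"); err != nil {
		fmt.Println("update failed:", err)
	}
}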
	I0925 11:28:39.250752   59899 start.go:300] post-start starting for "embed-certs-094323" (driver="kvm2")
	I0925 11:28:39.250766   59899 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 11:28:39.250786   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.251224   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 11:28:39.251260   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:39.254399   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.254904   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.254937   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.255093   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:39.255261   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.255432   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:39.255612   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:28:39.350663   59899 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 11:28:39.357361   59899 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 11:28:39.357388   59899 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-6032/.minikube/addons for local assets ...
	I0925 11:28:39.357464   59899 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-6032/.minikube/files for local assets ...
	I0925 11:28:39.357582   59899 filesync.go:149] local asset: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem -> 132132.pem in /etc/ssl/certs
	I0925 11:28:39.357712   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 11:28:39.374752   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem --> /etc/ssl/certs/132132.pem (1708 bytes)
	I0925 11:28:39.407365   59899 start.go:303] post-start completed in 156.599445ms
	I0925 11:28:39.407390   59899 fix.go:56] fixHost completed within 20.265737349s
	I0925 11:28:39.407412   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:39.409869   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.410204   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.410246   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.410351   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:39.410526   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.410672   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.410817   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:39.411004   59899 main.go:141] libmachine: Using SSH client type: native
	I0925 11:28:39.411443   59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0925 11:28:39.411457   59899 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 11:28:39.525878   59899 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695641319.473578694
	
	I0925 11:28:39.525906   59899 fix.go:206] guest clock: 1695641319.473578694
	I0925 11:28:39.525916   59899 fix.go:219] Guest: 2023-09-25 11:28:39.473578694 +0000 UTC Remote: 2023-09-25 11:28:39.407394176 +0000 UTC m=+20.400726255 (delta=66.184518ms)
	I0925 11:28:39.525941   59899 fix.go:190] guest clock delta is within tolerance: 66.184518ms
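The clock check above runs `date +%s.%N` in the guest, parses the seconds.nanoseconds output, and compares it with the host clock; the 66ms delta here is well inside tolerance. A sketch of that parse-and-compare step (helper names are assumed, not minikube's actual fix.go code; it assumes `%N` always yields nine digits, which GNU date guarantees):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time so the
// host can compute the guest/host clock delta seen in the log.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// %N is zero-padded to nine digits, so the fraction is nanoseconds.
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1695641319.473578694")
	delta := time.Since(guest)
	const tolerance = time.Second
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
}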
	I0925 11:28:39.525949   59899 start.go:83] releasing machines lock for "embed-certs-094323", held for 20.384309776s
	I0925 11:28:39.525980   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.526255   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
	I0925 11:28:39.528977   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.529347   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.529375   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.529553   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.530157   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.530328   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:28:39.530430   59899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 11:28:39.530480   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:39.530741   59899 ssh_runner.go:195] Run: cat /version.json
	I0925 11:28:39.530766   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:28:39.533347   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.533598   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.533796   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.533834   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.534008   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:39.534017   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:39.534033   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:39.534116   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:28:39.534328   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.534397   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:28:39.534497   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:39.534546   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:28:39.534701   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:28:39.534716   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:28:39.619280   59899 ssh_runner.go:195] Run: systemctl --version
	I0925 11:28:39.651081   59899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 11:28:39.656908   59899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 11:28:39.656977   59899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 11:28:39.674233   59899 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 11:28:39.674259   59899 start.go:469] detecting cgroup driver to use...
	I0925 11:28:39.674415   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 11:28:39.693891   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 11:28:39.704196   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 11:28:39.714537   59899 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 11:28:39.714587   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 11:28:39.724833   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 11:28:39.734476   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 11:28:39.744763   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 11:28:39.755865   59899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 11:28:39.765565   59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 11:28:39.775652   59899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 11:28:39.785628   59899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 11:28:39.794828   59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:28:39.915710   59899 ssh_runner.go:195] Run: sudo systemctl restart containerd
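The series of `sed` edits above forces containerd onto the cgroupfs driver and normalizes its runtime settings before the restart. The core substitution, expressed here with Go's regexp package instead of sed, looks roughly like this (a sketch; the function name and flow are assumptions):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// forceCgroupfs rewrites `SystemdCgroup = ...` to false in containerd's
// config.toml, the same edit the log performs with sed, so containerd's
// cgroup driver matches the kubelet's "cgroupfs" setting.
func forceCgroupfs(path string) error {
	in, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(in, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := forceCgroupfs("/etc/containerd/config.toml"); err != nil {
		fmt.Println(err)
	}
}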
	I0925 11:28:39.933084   59899 start.go:469] detecting cgroup driver to use...
	I0925 11:28:39.933164   59899 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 11:28:39.949304   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 11:28:39.963709   59899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 11:28:39.980784   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 11:28:39.994887   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 11:28:40.007408   59899 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 11:28:40.034805   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 11:28:40.047786   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 11:28:40.066171   59899 ssh_runner.go:195] Run: which cri-dockerd
	I0925 11:28:40.070494   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 11:28:40.078000   59899 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 11:28:40.093462   59899 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 11:28:40.197902   59899 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 11:28:40.313798   59899 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 11:28:40.313947   59899 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 11:28:40.330472   59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:28:40.443989   59899 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 11:28:41.943902   59899 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.49987353s)
	I0925 11:28:41.943995   59899 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 11:28:42.063894   59899 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 11:28:42.177577   59899 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 11:28:42.291042   59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:28:42.407796   59899 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 11:28:42.429673   59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:28:42.553611   59899 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0925 11:28:42.637258   59899 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 11:28:42.637336   59899 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 11:28:42.643315   59899 start.go:537] Will wait 60s for crictl version
	I0925 11:28:42.643380   59899 ssh_runner.go:195] Run: which crictl
	I0925 11:28:42.647521   59899 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 11:28:42.709061   59899 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0925 11:28:42.709123   59899 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 11:28:42.735005   59899 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 11:28:39.992653   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:42.493405   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:42.763193   59899 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0925 11:28:42.763239   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
	I0925 11:28:42.766116   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:42.766453   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:28:42.766487   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:28:42.766740   59899 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0925 11:28:42.770645   59899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
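The `/etc/hosts` update above is a grep-filter-and-append rewrite: strip any existing line for the name, append the desired mapping, and copy the result back. The same upsert expressed directly in Go, as a sketch against any hosts-format file (not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" and appends
// "ip\tname", mirroring the { grep -v ...; echo ...; } rewrite in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}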
	I0925 11:28:42.782793   59899 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 11:28:42.782837   59899 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:28:42.805110   59899 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0925 11:28:42.805135   59899 docker.go:594] Images already preloaded, skipping extraction
	I0925 11:28:42.805190   59899 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:28:42.824840   59899 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0925 11:28:42.824876   59899 cache_images.go:84] Images are preloaded, skipping loading
	I0925 11:28:42.824941   59899 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 11:28:42.858255   59899 cni.go:84] Creating CNI manager for ""
	I0925 11:28:42.858285   59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:28:42.858303   59899 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 11:28:42.858319   59899 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.111 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-094323 NodeName:embed-certs-094323 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 11:28:42.858443   59899 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-094323"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
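
The YAML above is rendered from the kubeadm options struct logged a few lines earlier. A reduced sketch of that struct-to-YAML step using text/template follows; the field names and template fragment are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts carries a few of the fields from the options struct in the log.
type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
}

// frag is a minimal InitConfiguration fragment filled from the struct.
const frag = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	_ = t.Execute(os.Stdout, kubeadmOpts{
		AdvertiseAddress: "192.168.39.111",
		APIServerPort:    8443,
		NodeName:         "embed-certs-094323",
	})
}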
	
	I0925 11:28:42.858508   59899 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-094323 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 11:28:42.858563   59899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 11:28:42.868791   59899 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 11:28:42.868861   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 11:28:42.878094   59899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0925 11:28:42.894185   59899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 11:28:42.910390   59899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
	I0925 11:28:42.929194   59899 ssh_runner.go:195] Run: grep 192.168.39.111	control-plane.minikube.internal$ /etc/hosts
	I0925 11:28:42.933290   59899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 11:28:42.946061   59899 certs.go:56] Setting up /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323 for IP: 192.168.39.111
	I0925 11:28:42.946095   59899 certs.go:190] acquiring lock for shared ca certs: {Name:mkb77fd8e605e52ea68ab5351af7de9da389c0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:28:42.946253   59899 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17297-6032/.minikube/ca.key
	I0925 11:28:42.946292   59899 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.key
	I0925 11:28:42.946354   59899 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/client.key
	I0925 11:28:42.946414   59899 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/apiserver.key.f4aa454f
	I0925 11:28:42.946448   59899 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/proxy-client.key
	I0925 11:28:42.946581   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213.pem (1338 bytes)
	W0925 11:28:42.946628   59899 certs.go:433] ignoring /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213_empty.pem, impossibly tiny 0 bytes
	I0925 11:28:42.946648   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 11:28:42.946675   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem (1078 bytes)
	I0925 11:28:42.946706   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem (1123 bytes)
	I0925 11:28:42.946743   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem (1679 bytes)
	I0925 11:28:42.946793   59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem (1708 bytes)
	I0925 11:28:42.947417   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 11:28:42.970517   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0925 11:28:42.995598   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 11:28:43.019025   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0925 11:28:43.044246   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 11:28:43.068806   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0925 11:28:43.093317   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 11:28:43.117196   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 11:28:43.140309   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213.pem --> /usr/share/ca-certificates/13213.pem (1338 bytes)
	I0925 11:28:43.164129   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem --> /usr/share/ca-certificates/132132.pem (1708 bytes)
	I0925 11:28:43.187747   59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 11:28:43.211759   59899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 11:28:43.229751   59899 ssh_runner.go:195] Run: openssl version
	I0925 11:28:43.235370   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13213.pem && ln -fs /usr/share/ca-certificates/13213.pem /etc/ssl/certs/13213.pem"
	I0925 11:28:43.244462   59899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13213.pem
	I0925 11:28:43.249084   59899 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 10:38 /usr/share/ca-certificates/13213.pem
	I0925 11:28:43.249131   59899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13213.pem
	I0925 11:28:43.254522   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13213.pem /etc/ssl/certs/51391683.0"
	I0925 11:28:43.263996   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132132.pem && ln -fs /usr/share/ca-certificates/132132.pem /etc/ssl/certs/132132.pem"
	I0925 11:28:43.273424   59899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132132.pem
	I0925 11:28:43.278155   59899 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 10:38 /usr/share/ca-certificates/132132.pem
	I0925 11:28:43.278194   59899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132132.pem
	I0925 11:28:43.283762   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132132.pem /etc/ssl/certs/3ec20f2e.0"
	I0925 11:28:43.293817   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 11:28:43.303828   59899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:28:43.309173   59899 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:28:43.309215   59899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:28:43.315555   59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 11:28:43.325092   59899 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 11:28:43.329555   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0925 11:28:43.335420   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0925 11:28:43.341663   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0925 11:28:43.347218   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0925 11:28:43.352934   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0925 11:28:43.359116   59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
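Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours (exit 0 means it does not, so the cert can be reused). The equivalent test in Go's crypto/x509, as a sketch with an assumed helper name:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate's NotAfter falls
// inside the next d, the same question `openssl x509 -checkend` answers.
func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}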
	I0925 11:28:43.364415   59899 kubeadm.go:404] StartCluster: {Name:embed-certs-094323 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 11:28:43.364539   59899 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 11:28:43.383931   59899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 11:28:43.393096   59899 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0925 11:28:43.393114   59899 kubeadm.go:636] restartCluster start
	I0925 11:28:43.393149   59899 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0925 11:28:43.402414   59899 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:43.403165   59899 kubeconfig.go:135] verify returned: extract IP: "embed-certs-094323" does not appear in /home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:28:43.403590   59899 kubeconfig.go:146] "embed-certs-094323" context is missing from /home/jenkins/minikube-integration/17297-6032/kubeconfig - will repair!
	I0925 11:28:43.404176   59899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:28:43.405944   59899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0925 11:28:43.413960   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:43.414004   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:43.424035   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:43.424049   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:43.424076   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:43.435299   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:43.935935   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:43.936031   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:43.947516   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:39.905311   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:41.908598   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:44.404783   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:41.172736   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:43.174138   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:45.174205   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:44.990934   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:46.991805   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:44.435537   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:44.435624   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:44.447609   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:44.936220   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:44.936386   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:44.948140   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:45.435733   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:45.435829   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:45.448013   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:45.935443   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:45.935535   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:45.947333   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:46.435451   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:46.435515   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:46.447174   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:46.935705   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:46.935782   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:46.947562   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:47.436134   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:47.436202   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:47.447762   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:47.936080   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:47.936141   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:47.947832   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:48.435362   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:48.435430   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:48.446887   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:48.935379   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:48.935477   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:48.948793   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:46.904475   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:48.905486   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:47.176223   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:49.674353   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:49.491562   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:51.492069   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:53.492471   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:49.436282   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:49.436396   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:49.447719   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:49.936050   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:49.936137   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:49.948346   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:50.435443   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:50.435524   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:50.446725   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:50.936401   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:50.936479   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:50.948716   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:51.436316   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:51.436391   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:51.447984   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:51.936106   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:51.936183   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:51.951846   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:52.435363   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:52.435459   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:52.447499   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 11:28:52.936093   59899 api_server.go:166] Checking apiserver status ...
	I0925 11:28:52.936170   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 11:28:52.948743   59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
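The repeated "Checking apiserver status" entries above are a fixed-interval retry: re-run `pgrep` roughly every 500ms until the process appears or the surrounding context times out, which is exactly how this run ends on the next line. A condensed sketch of that loop (hypothetical function, not the api_server.go original):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` every 500ms until it succeeds
// (exit 0: a matching process exists) or ctx expires, mirroring the retry
// loop and the "context deadline exceeded" outcome seen in the log.
func waitForProcess(ctx context.Context, pattern string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForProcess(ctx, "kube-apiserver.*minikube.*"))
}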
	I0925 11:28:53.414466   59899 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0925 11:28:53.414503   59899 kubeadm.go:1128] stopping kube-system containers ...
	I0925 11:28:53.414561   59899 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 11:28:53.436706   59899 docker.go:463] Stopping containers: [5433505b8c84 5955297b2651 0b460a10ea1f 8f77078f7165 339fcb3416d5 b8e7d5af3c42 41f8be78a4f7 00a2998c5488 55442ce14fe2 a9a363aa2856 e1118b32fbd4 dcf727ef2c38 d7715df7bd8b fc60135d9ddb 56727523c1f3]
	I0925 11:28:53.436785   59899 ssh_runner.go:195] Run: docker stop 5433505b8c84 5955297b2651 0b460a10ea1f 8f77078f7165 339fcb3416d5 b8e7d5af3c42 41f8be78a4f7 00a2998c5488 55442ce14fe2 a9a363aa2856 e1118b32fbd4 dcf727ef2c38 d7715df7bd8b fc60135d9ddb 56727523c1f3
	I0925 11:28:53.460993   59899 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0925 11:28:53.476266   59899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 11:28:53.485682   59899 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 11:28:53.485753   59899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 11:28:53.495238   59899 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0925 11:28:53.495259   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:53.625292   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:51.404218   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:53.404644   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:52.173594   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:54.173762   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:55.992677   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:58.491954   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:54.299318   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:54.496012   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:54.595147   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:28:54.679425   59899 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:28:54.679506   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:54.698114   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:55.211538   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:55.711672   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:56.211025   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:56.711636   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:28:56.734459   59899 api_server.go:72] duration metric: took 2.055031465s to wait for apiserver process to appear ...
	I0925 11:28:56.734482   59899 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:28:56.734499   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:28:56.735092   59899 api_server.go:269] stopped: https://192.168.39.111:8443/healthz: Get "https://192.168.39.111:8443/healthz": dial tcp 192.168.39.111:8443: connect: connection refused
	I0925 11:28:56.735125   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:28:56.735727   59899 api_server.go:269] stopped: https://192.168.39.111:8443/healthz: Get "https://192.168.39.111:8443/healthz": dial tcp 192.168.39.111:8443: connect: connection refused
	I0925 11:28:57.236460   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:28:55.405884   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:57.904799   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:56.673626   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:28:58.673704   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:00.709537   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0925 11:29:00.709569   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0925 11:29:00.709581   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:00.795585   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0925 11:29:00.795613   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0925 11:29:00.795624   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:00.911357   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 11:29:00.911393   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 11:29:01.236809   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:01.242260   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 11:29:01.242286   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 11:29:01.735856   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:01.743534   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 11:29:01.743563   59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 11:29:02.236812   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:29:02.247395   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 200:
	ok
	I0925 11:29:02.257253   59899 api_server.go:141] control plane version: v1.28.2
	I0925 11:29:02.257277   59899 api_server.go:131] duration metric: took 5.522789199s to wait for apiserver health ...
	I0925 11:29:02.257286   59899 cni.go:84] Creating CNI manager for ""
	I0925 11:29:02.257297   59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:29:02.258988   59899 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 11:29:00.496638   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.992616   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.260493   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 11:29:02.275303   59899 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0925 11:29:02.297272   59899 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:29:02.308818   59899 system_pods.go:59] 8 kube-system pods found
	I0925 11:29:02.308855   59899 system_pods.go:61] "coredns-5dd5756b68-7kfz5" [9225f684-4ad2-462b-a20b-13dd27aad56f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:29:02.308868   59899 system_pods.go:61] "etcd-embed-certs-094323" [5603d9a0-390a-4cf1-ad8f-a976016d96e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0925 11:29:02.308879   59899 system_pods.go:61] "kube-apiserver-embed-certs-094323" [eb928fb0-77a3-45c5-81ce-03ffcb288548] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0925 11:29:02.308889   59899 system_pods.go:61] "kube-controller-manager-embed-certs-094323" [8ee4e42e-367a-4be8-9787-c6eb13913d8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0925 11:29:02.308900   59899 system_pods.go:61] "kube-proxy-5k6vp" [b5a3fb6d-bc10-4cde-a1f1-8c57a1fa480b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:29:02.308911   59899 system_pods.go:61] "kube-scheduler-embed-certs-094323" [4e15edd2-b5f1-4441-b940-2055f20354d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0925 11:29:02.308926   59899 system_pods.go:61] "metrics-server-57f55c9bc5-xcns4" [32a1d71d-7f4d-466a-b745-d2fdf6a88570] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:29:02.308942   59899 system_pods.go:61] "storage-provisioner" [91ac60cc-4154-4e62-aa3e-6c492764d7f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:29:02.308955   59899 system_pods.go:74] duration metric: took 11.663759ms to wait for pod list to return data ...
	I0925 11:29:02.308969   59899 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:29:02.315279   59899 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:29:02.315316   59899 node_conditions.go:123] node cpu capacity is 2
	I0925 11:29:02.315329   59899 node_conditions.go:105] duration metric: took 6.35463ms to run NodePressure ...
	I0925 11:29:02.315351   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 11:29:02.598238   59899 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0925 11:29:02.603645   59899 kubeadm.go:787] kubelet initialised
	I0925 11:29:02.603673   59899 kubeadm.go:788] duration metric: took 5.409805ms waiting for restarted kubelet to initialise ...
	I0925 11:29:02.603682   59899 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:29:02.609652   59899 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:02.616919   59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.616945   59899 pod_ready.go:81] duration metric: took 7.267055ms waiting for pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:02.616957   59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.616966   59899 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:02.626927   59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "etcd-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.626952   59899 pod_ready.go:81] duration metric: took 9.977984ms waiting for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:02.626964   59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "etcd-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.626975   59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:02.635040   59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.635057   59899 pod_ready.go:81] duration metric: took 8.069751ms waiting for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:02.635065   59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.635071   59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:02.701570   59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.701594   59899 pod_ready.go:81] duration metric: took 66.51566ms waiting for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:02.701604   59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
	I0925 11:29:02.701614   59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5k6vp" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:00.404282   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.407062   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:00.674496   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:02.676016   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:04.677117   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:05.005683   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:07.491820   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:04.513619   59899 pod_ready.go:92] pod "kube-proxy-5k6vp" in "kube-system" namespace has status "Ready":"True"
	I0925 11:29:04.513641   59899 pod_ready.go:81] duration metric: took 1.812019136s waiting for pod "kube-proxy-5k6vp" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:04.513650   59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:06.610704   59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:08.610973   59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:04.905976   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:07.404291   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:09.408011   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:07.173790   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:09.673547   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:09.492854   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:11.991906   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:11.110562   59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:13.112908   59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:11.905538   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.404450   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:12.173257   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.673817   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.492243   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:16.991655   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:14.610905   59899 pod_ready.go:92] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:29:14.610923   59899 pod_ready.go:81] duration metric: took 10.097268131s waiting for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:14.610932   59899 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace to be "Ready" ...
	I0925 11:29:16.629749   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:16.412718   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:18.906798   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:17.173554   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:19.674607   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:18.992367   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:21.491588   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:19.130001   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:21.629643   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:21.403543   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:23.405654   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:22.173742   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:24.674422   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:23.992075   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:26.491409   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:28.492221   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:24.129530   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:26.629049   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:28.629817   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:25.909201   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:28.403475   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:27.174742   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:29.673522   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:30.990733   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:33.492080   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:31.128865   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:33.129900   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:30.405115   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:32.904179   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:31.674133   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:34.173962   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:35.990697   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:37.991964   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:35.629757   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:37.630073   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:34.905517   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:37.405590   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:36.175249   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:38.674512   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:40.490747   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:42.991730   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:40.129932   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:42.628523   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:39.904204   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:41.905925   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:44.406994   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:41.172242   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:43.173423   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:45.174163   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:44.992082   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:47.491243   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:44.629935   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:47.129139   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:46.904285   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.409716   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:47.174974   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.673662   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.993800   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:52.491813   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:49.130049   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:51.628211   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:53.629350   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:51.905344   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:53.905370   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:52.173811   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:54.673161   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:54.493022   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:56.993331   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:55.629518   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:57.629571   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:55.909272   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:58.403659   57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:58.407567   57752 pod_ready.go:81] duration metric: took 4m0.000815308s waiting for pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace to be "Ready" ...
	E0925 11:29:58.407592   57752 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:29:58.407601   57752 pod_ready.go:38] duration metric: took 4m6.831828709s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:29:58.407622   57752 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:29:58.407686   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:29:58.442532   57752 logs.go:284] 2 containers: [ae812308b161 50dd56b362e6]
	I0925 11:29:58.442627   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:29:58.466398   57752 logs.go:284] 2 containers: [f056fda5e129 771fdc2d4d72]
	I0925 11:29:58.466474   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:29:58.488629   57752 logs.go:284] 2 containers: [f4f7d2a397a7 19c28e83f034]
	I0925 11:29:58.488710   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:29:58.515985   57752 logs.go:284] 2 containers: [dd7534763296 0e6944ef9ef1]
	I0925 11:29:58.516069   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:29:58.551483   57752 logs.go:284] 2 containers: [ba51b7a85dfa c3c77640a284]
	I0925 11:29:58.551593   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:29:58.575447   57752 logs.go:284] 2 containers: [f5a2c4593b48 2b682a364274]
	I0925 11:29:58.575518   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:29:58.595332   57752 logs.go:284] 0 containers: []
	W0925 11:29:58.595354   57752 logs.go:286] No container was found matching "kindnet"
	I0925 11:29:58.595406   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:29:58.616993   57752 logs.go:284] 1 containers: [146977376d21]
	I0925 11:29:58.617053   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:29:58.641655   57752 logs.go:284] 2 containers: [a296191b186b e152c53b10e3]
	I0925 11:29:58.641682   57752 logs.go:123] Gathering logs for kube-controller-manager [f5a2c4593b48] ...
	I0925 11:29:58.641692   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a2c4593b48"
	I0925 11:29:58.697709   57752 logs.go:123] Gathering logs for storage-provisioner [e152c53b10e3] ...
	I0925 11:29:58.697746   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e152c53b10e3"
	I0925 11:29:58.720902   57752 logs.go:123] Gathering logs for container status ...
	I0925 11:29:58.720930   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:29:58.812571   57752 logs.go:123] Gathering logs for dmesg ...
	I0925 11:29:58.812609   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:29:58.833650   57752 logs.go:123] Gathering logs for kube-apiserver [50dd56b362e6] ...
	I0925 11:29:58.833678   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dd56b362e6"
	I0925 11:29:58.888959   57752 logs.go:123] Gathering logs for etcd [f056fda5e129] ...
	I0925 11:29:58.888999   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f056fda5e129"
	I0925 11:29:58.924906   57752 logs.go:123] Gathering logs for coredns [f4f7d2a397a7] ...
	I0925 11:29:58.924934   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f7d2a397a7"
	I0925 11:29:58.951722   57752 logs.go:123] Gathering logs for storage-provisioner [a296191b186b] ...
	I0925 11:29:58.951750   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a296191b186b"
	I0925 11:29:58.975890   57752 logs.go:123] Gathering logs for Docker ...
	I0925 11:29:58.975912   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:29:59.042048   57752 logs.go:123] Gathering logs for kube-apiserver [ae812308b161] ...
	I0925 11:29:59.042077   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae812308b161"
	I0925 11:29:59.090056   57752 logs.go:123] Gathering logs for coredns [19c28e83f034] ...
	I0925 11:29:59.090083   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c28e83f034"
	I0925 11:29:59.118231   57752 logs.go:123] Gathering logs for kube-proxy [ba51b7a85dfa] ...
	I0925 11:29:59.118257   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba51b7a85dfa"
	I0925 11:29:59.141561   57752 logs.go:123] Gathering logs for kube-proxy [c3c77640a284] ...
	I0925 11:29:59.141584   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c77640a284"
	I0925 11:29:59.168388   57752 logs.go:123] Gathering logs for kubernetes-dashboard [146977376d21] ...
	I0925 11:29:59.168420   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 146977376d21"
	I0925 11:29:59.202331   57752 logs.go:123] Gathering logs for kubelet ...
	I0925 11:29:59.202355   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:29:59.278282   57752 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:29:59.278317   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:29:59.431326   57752 logs.go:123] Gathering logs for kube-scheduler [dd7534763296] ...
	I0925 11:29:59.431356   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd7534763296"
	I0925 11:29:59.462487   57752 logs.go:123] Gathering logs for kube-controller-manager [2b682a364274] ...
	I0925 11:29:59.462516   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b682a364274"
	I0925 11:29:59.506895   57752 logs.go:123] Gathering logs for etcd [771fdc2d4d72] ...
	I0925 11:29:59.506927   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 771fdc2d4d72"
	I0925 11:29:59.551311   57752 logs.go:123] Gathering logs for kube-scheduler [0e6944ef9ef1] ...
	I0925 11:29:59.551351   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6944ef9ef1"
	I0925 11:29:56.674157   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:59.174193   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:29:59.490645   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:01.491108   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:03.491826   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:00.130429   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:02.630390   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:02.085037   57752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:30:02.106600   57752 api_server.go:72] duration metric: took 4m14.069395428s to wait for apiserver process to appear ...
	I0925 11:30:02.106631   57752 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:30:02.106709   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:02.131534   57752 logs.go:284] 2 containers: [ae812308b161 50dd56b362e6]
	I0925 11:30:02.131610   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:02.154915   57752 logs.go:284] 2 containers: [f056fda5e129 771fdc2d4d72]
	I0925 11:30:02.154979   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:02.178047   57752 logs.go:284] 2 containers: [f4f7d2a397a7 19c28e83f034]
	I0925 11:30:02.178108   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:02.202658   57752 logs.go:284] 2 containers: [dd7534763296 0e6944ef9ef1]
	I0925 11:30:02.202754   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:02.224819   57752 logs.go:284] 2 containers: [ba51b7a85dfa c3c77640a284]
	I0925 11:30:02.224908   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:02.246587   57752 logs.go:284] 2 containers: [f5a2c4593b48 2b682a364274]
	I0925 11:30:02.246650   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:02.267013   57752 logs.go:284] 0 containers: []
	W0925 11:30:02.267037   57752 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:02.267090   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:02.286403   57752 logs.go:284] 2 containers: [a296191b186b e152c53b10e3]
	I0925 11:30:02.286476   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:02.307111   57752 logs.go:284] 1 containers: [146977376d21]
	I0925 11:30:02.307141   57752 logs.go:123] Gathering logs for kube-controller-manager [f5a2c4593b48] ...
	I0925 11:30:02.307154   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a2c4593b48"
	I0925 11:30:02.347993   57752 logs.go:123] Gathering logs for storage-provisioner [a296191b186b] ...
	I0925 11:30:02.348022   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a296191b186b"
	I0925 11:30:02.370841   57752 logs.go:123] Gathering logs for kubernetes-dashboard [146977376d21] ...
	I0925 11:30:02.370875   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 146977376d21"
	I0925 11:30:02.396931   57752 logs.go:123] Gathering logs for etcd [771fdc2d4d72] ...
	I0925 11:30:02.396954   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 771fdc2d4d72"
	I0925 11:30:02.438996   57752 logs.go:123] Gathering logs for kube-scheduler [dd7534763296] ...
	I0925 11:30:02.439025   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd7534763296"
	I0925 11:30:02.464589   57752 logs.go:123] Gathering logs for kube-proxy [ba51b7a85dfa] ...
	I0925 11:30:02.464621   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba51b7a85dfa"
	I0925 11:30:02.492060   57752 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:02.492087   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:30:02.558928   57752 logs.go:123] Gathering logs for container status ...
	I0925 11:30:02.558959   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:02.654217   57752 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:02.654246   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:02.669423   57752 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:02.669453   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:02.802934   57752 logs.go:123] Gathering logs for coredns [f4f7d2a397a7] ...
	I0925 11:30:02.802959   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f7d2a397a7"
	I0925 11:30:02.835624   57752 logs.go:123] Gathering logs for coredns [19c28e83f034] ...
	I0925 11:30:02.835649   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c28e83f034"
	I0925 11:30:02.866826   57752 logs.go:123] Gathering logs for kube-scheduler [0e6944ef9ef1] ...
	I0925 11:30:02.866849   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6944ef9ef1"
	I0925 11:30:02.898744   57752 logs.go:123] Gathering logs for kube-apiserver [ae812308b161] ...
	I0925 11:30:02.898775   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae812308b161"
	I0925 11:30:02.934534   57752 logs.go:123] Gathering logs for kube-apiserver [50dd56b362e6] ...
	I0925 11:30:02.934567   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dd56b362e6"
	I0925 11:30:02.972310   57752 logs.go:123] Gathering logs for etcd [f056fda5e129] ...
	I0925 11:30:02.972337   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f056fda5e129"
	I0925 11:30:03.005474   57752 logs.go:123] Gathering logs for storage-provisioner [e152c53b10e3] ...
	I0925 11:30:03.005499   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e152c53b10e3"
	I0925 11:30:03.027346   57752 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:03.027368   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:03.099823   57752 logs.go:123] Gathering logs for kube-proxy [c3c77640a284] ...
	I0925 11:30:03.099857   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c77640a284"
	I0925 11:30:03.124682   57752 logs.go:123] Gathering logs for kube-controller-manager [2b682a364274] ...
	I0925 11:30:03.124717   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b682a364274"
	I0925 11:30:01.674624   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:04.179180   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:05.991507   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:08.492917   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:05.129924   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:07.630929   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:05.663871   57752 api_server.go:253] Checking apiserver healthz at https://192.168.72.162:8443/healthz ...
	I0925 11:30:05.669416   57752 api_server.go:279] https://192.168.72.162:8443/healthz returned 200:
	ok
	I0925 11:30:05.670783   57752 api_server.go:141] control plane version: v1.28.2
	I0925 11:30:05.670809   57752 api_server.go:131] duration metric: took 3.564170226s to wait for apiserver health ...
	I0925 11:30:05.670819   57752 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:30:05.670872   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:05.693324   57752 logs.go:284] 2 containers: [ae812308b161 50dd56b362e6]
	I0925 11:30:05.693399   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:05.717998   57752 logs.go:284] 2 containers: [f056fda5e129 771fdc2d4d72]
	I0925 11:30:05.718069   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:05.742708   57752 logs.go:284] 2 containers: [f4f7d2a397a7 19c28e83f034]
	I0925 11:30:05.742793   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:05.764298   57752 logs.go:284] 2 containers: [dd7534763296 0e6944ef9ef1]
	I0925 11:30:05.764374   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:05.785970   57752 logs.go:284] 2 containers: [ba51b7a85dfa c3c77640a284]
	I0925 11:30:05.786039   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:05.806950   57752 logs.go:284] 2 containers: [f5a2c4593b48 2b682a364274]
	I0925 11:30:05.807037   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:05.826462   57752 logs.go:284] 0 containers: []
	W0925 11:30:05.826487   57752 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:05.826540   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:05.845927   57752 logs.go:284] 1 containers: [146977376d21]
	I0925 11:30:05.845997   57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:05.868573   57752 logs.go:284] 2 containers: [a296191b186b e152c53b10e3]
	I0925 11:30:05.868615   57752 logs.go:123] Gathering logs for kube-scheduler [0e6944ef9ef1] ...
	I0925 11:30:05.868629   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6944ef9ef1"
	I0925 11:30:05.909242   57752 logs.go:123] Gathering logs for kube-controller-manager [f5a2c4593b48] ...
	I0925 11:30:05.909270   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a2c4593b48"
	I0925 11:30:05.959647   57752 logs.go:123] Gathering logs for storage-provisioner [e152c53b10e3] ...
	I0925 11:30:05.959680   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e152c53b10e3"
	I0925 11:30:05.988448   57752 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:05.988480   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:06.067394   57752 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:06.067429   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:06.084943   57752 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:06.084971   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:06.238324   57752 logs.go:123] Gathering logs for etcd [f056fda5e129] ...
	I0925 11:30:06.238357   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f056fda5e129"
	I0925 11:30:06.273373   57752 logs.go:123] Gathering logs for coredns [f4f7d2a397a7] ...
	I0925 11:30:06.273403   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f7d2a397a7"
	I0925 11:30:06.303181   57752 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:06.303211   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:30:06.365354   57752 logs.go:123] Gathering logs for coredns [19c28e83f034] ...
	I0925 11:30:06.365398   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c28e83f034"
	I0925 11:30:06.391962   57752 logs.go:123] Gathering logs for kube-scheduler [dd7534763296] ...
	I0925 11:30:06.391989   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd7534763296"
	I0925 11:30:06.415389   57752 logs.go:123] Gathering logs for kube-proxy [c3c77640a284] ...
	I0925 11:30:06.415412   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c77640a284"
	I0925 11:30:06.441786   57752 logs.go:123] Gathering logs for kube-controller-manager [2b682a364274] ...
	I0925 11:30:06.441809   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b682a364274"
	I0925 11:30:06.479862   57752 logs.go:123] Gathering logs for kubernetes-dashboard [146977376d21] ...
	I0925 11:30:06.479892   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 146977376d21"
	I0925 11:30:06.507143   57752 logs.go:123] Gathering logs for kube-apiserver [50dd56b362e6] ...
	I0925 11:30:06.507186   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dd56b362e6"
	I0925 11:30:06.546486   57752 logs.go:123] Gathering logs for etcd [771fdc2d4d72] ...
	I0925 11:30:06.546514   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 771fdc2d4d72"
	I0925 11:30:06.591229   57752 logs.go:123] Gathering logs for kube-proxy [ba51b7a85dfa] ...
	I0925 11:30:06.591258   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba51b7a85dfa"
	I0925 11:30:06.616844   57752 logs.go:123] Gathering logs for container status ...
	I0925 11:30:06.616869   57752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:06.705576   57752 logs.go:123] Gathering logs for kube-apiserver [ae812308b161] ...
	I0925 11:30:06.705606   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae812308b161"
	I0925 11:30:06.742505   57752 logs.go:123] Gathering logs for storage-provisioner [a296191b186b] ...
	I0925 11:30:06.742533   57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a296191b186b"
	I0925 11:30:09.274341   57752 system_pods.go:59] 8 kube-system pods found
	I0925 11:30:09.274368   57752 system_pods.go:61] "coredns-5dd5756b68-6786d" [e86c1a30-32f4-4518-9225-a6e735760871] Running
	I0925 11:30:09.274373   57752 system_pods.go:61] "etcd-no-preload-863905" [1af0b15d-6fff-41af-a97e-dc18bba9480f] Running
	I0925 11:30:09.274378   57752 system_pods.go:61] "kube-apiserver-no-preload-863905" [f7b1ffbf-13ef-4e05-9e71-87d03330cbf8] Running
	I0925 11:30:09.274383   57752 system_pods.go:61] "kube-controller-manager-no-preload-863905" [0fdd6d61-d653-4555-8333-e8275502c7b2] Running
	I0925 11:30:09.274386   57752 system_pods.go:61] "kube-proxy-g9dff" [db292442-0bc8-4d3f-b34f-c0142915ca47] Running
	I0925 11:30:09.274390   57752 system_pods.go:61] "kube-scheduler-no-preload-863905" [e832de51-a864-49ac-9919-9a02b16a029b] Running
	I0925 11:30:09.274397   57752 system_pods.go:61] "metrics-server-57f55c9bc5-p2tvr" [fc088a2c-3867-410d-b513-29e872f8156e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:09.274402   57752 system_pods.go:61] "storage-provisioner" [13df307c-c76e-4abd-bd03-165b04163d3a] Running
	I0925 11:30:09.274408   57752 system_pods.go:74] duration metric: took 3.603584218s to wait for pod list to return data ...
	I0925 11:30:09.274414   57752 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:30:09.276929   57752 default_sa.go:45] found service account: "default"
	I0925 11:30:09.276948   57752 default_sa.go:55] duration metric: took 2.5282ms for default service account to be created ...
	I0925 11:30:09.276954   57752 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:30:09.282656   57752 system_pods.go:86] 8 kube-system pods found
	I0925 11:30:09.282684   57752 system_pods.go:89] "coredns-5dd5756b68-6786d" [e86c1a30-32f4-4518-9225-a6e735760871] Running
	I0925 11:30:09.282690   57752 system_pods.go:89] "etcd-no-preload-863905" [1af0b15d-6fff-41af-a97e-dc18bba9480f] Running
	I0925 11:30:09.282694   57752 system_pods.go:89] "kube-apiserver-no-preload-863905" [f7b1ffbf-13ef-4e05-9e71-87d03330cbf8] Running
	I0925 11:30:09.282699   57752 system_pods.go:89] "kube-controller-manager-no-preload-863905" [0fdd6d61-d653-4555-8333-e8275502c7b2] Running
	I0925 11:30:09.282702   57752 system_pods.go:89] "kube-proxy-g9dff" [db292442-0bc8-4d3f-b34f-c0142915ca47] Running
	I0925 11:30:09.282706   57752 system_pods.go:89] "kube-scheduler-no-preload-863905" [e832de51-a864-49ac-9919-9a02b16a029b] Running
	I0925 11:30:09.282712   57752 system_pods.go:89] "metrics-server-57f55c9bc5-p2tvr" [fc088a2c-3867-410d-b513-29e872f8156e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:09.282721   57752 system_pods.go:89] "storage-provisioner" [13df307c-c76e-4abd-bd03-165b04163d3a] Running
	I0925 11:30:09.282728   57752 system_pods.go:126] duration metric: took 5.769715ms to wait for k8s-apps to be running ...
	I0925 11:30:09.282734   57752 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 11:30:09.282774   57752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:09.296447   57752 system_svc.go:56] duration metric: took 13.70254ms WaitForService to wait for kubelet.
	I0925 11:30:09.296472   57752 kubeadm.go:581] duration metric: took 4m21.259281902s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 11:30:09.296496   57752 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:30:09.300312   57752 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:30:09.300337   57752 node_conditions.go:123] node cpu capacity is 2
	I0925 11:30:09.300350   57752 node_conditions.go:105] duration metric: took 3.848191ms to run NodePressure ...
	I0925 11:30:09.300362   57752 start.go:228] waiting for startup goroutines ...
	I0925 11:30:09.300371   57752 start.go:233] waiting for cluster config update ...
	I0925 11:30:09.300384   57752 start.go:242] writing updated cluster config ...
	I0925 11:30:09.300719   57752 ssh_runner.go:195] Run: rm -f paused
	I0925 11:30:09.350285   57752 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0925 11:30:09.353257   57752 out.go:177] * Done! kubectl is now configured to use "no-preload-863905" cluster and "default" namespace by default
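For context: the successful start above walks a fixed verification ladder before declaring "Done!" — list the kube-system pods, confirm the default service account, re-check that the k8s apps are Running, verify the kubelet unit is active, and read node capacity for the NodePressure check. A rough manual equivalent (a sketch only; it assumes the kubeconfig context is named after the profile, which is minikube's default):

	kubectl --context no-preload-863905 -n kube-system get pods          # pod list + readiness
	kubectl --context no-preload-863905 get sa default                   # default service account
	minikube -p no-preload-863905 ssh -- sudo systemctl is-active kubelet  # kubelet unit
	kubectl --context no-preload-863905 describe node | grep -A 6 'Conditions:'  # NodePressure inputs

Note that the metrics-server pod is still Pending here; it is not in the system-critical set being gated on, so the start completes anyway.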
	I0925 11:30:06.676262   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:09.174330   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:10.992813   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:13.490354   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:09.636520   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:12.129471   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:11.175516   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:13.673816   57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:14.366919   57426 pod_ready.go:81] duration metric: took 4m0.00014225s waiting for pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace to be "Ready" ...
	E0925 11:30:14.366953   57426 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:30:14.366991   57426 pod_ready.go:38] duration metric: took 4m1.195639658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:30:14.367015   57426 kubeadm.go:640] restartCluster took 5m2.405916758s
	W0925 11:30:14.367083   57426 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0925 11:30:14.367112   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0925 11:30:15.494599   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:17.993167   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:14.130508   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:16.132437   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:18.631163   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:17.424908   57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (3.057768249s)
	I0925 11:30:17.424975   57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:17.439514   57426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 11:30:17.449686   57426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 11:30:17.460096   57426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 11:30:17.460147   57426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0925 11:30:17.622252   57426 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0925 11:30:17.662261   57426 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I0925 11:30:17.759764   57426 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
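The sequence above is minikube giving up on an in-place restart: after the 4m0s pod-ready wait expired, it reset the old control plane and re-initialized it from the generated config (the stale-kubeconfig check came back empty because kubeadm reset had already removed /etc/kubernetes/*.conf). Condensed from the commands logged above:

	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	  kubeadm reset --cri-socket /var/run/dockershim.sock --force
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=...   # full ignore list as logged above

The three preflight warnings (cgroupfs driver, Docker 24.0.6 not validated against a v1.16 kubeadm, kubelet unit not enabled) are non-fatal and expected for this old-k8s-version profile.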
	I0925 11:30:20.493076   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:22.995449   57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:21.130370   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:23.137540   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:24.792048   57927 pod_ready.go:81] duration metric: took 4m0.000079144s waiting for pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace to be "Ready" ...
	E0925 11:30:24.792097   57927 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:30:24.792110   57927 pod_ready.go:38] duration metric: took 4m9.506946432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:30:24.792141   57927 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:30:24.792215   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:24.824238   57927 logs.go:284] 2 containers: [8b9c731d3b7e d7bd5b496cbd]
	I0925 11:30:24.824320   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:24.843686   57927 logs.go:284] 2 containers: [398bd2a5d8a1 5885667a7efa]
	I0925 11:30:24.843764   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:24.868292   57927 logs.go:284] 2 containers: [f04ac298d08a 7603adb1cbbb]
	I0925 11:30:24.868377   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:24.892540   57927 logs.go:284] 2 containers: [3815d034e8cc fb845c120fcf]
	I0925 11:30:24.892617   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:24.919019   57927 logs.go:284] 2 containers: [3061d1aa366b 30075b5efc6f]
	I0925 11:30:24.919091   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:24.946855   57927 logs.go:284] 2 containers: [b75d214e650c 1e96b0e25a6d]
	I0925 11:30:24.946930   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:24.989142   57927 logs.go:284] 0 containers: []
	W0925 11:30:24.989168   57927 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:24.989220   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:25.011261   57927 logs.go:284] 2 containers: [0f7378f7cd7f b9d2c22b02cb]
	I0925 11:30:25.011345   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:25.030950   57927 logs.go:284] 1 containers: [f3cb7eacbd5f]
	I0925 11:30:25.030977   57927 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:25.030989   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:25.120210   57927 logs.go:123] Gathering logs for etcd [398bd2a5d8a1] ...
	I0925 11:30:25.120239   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 398bd2a5d8a1"
	I0925 11:30:25.152215   57927 logs.go:123] Gathering logs for etcd [5885667a7efa] ...
	I0925 11:30:25.152243   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5885667a7efa"
	I0925 11:30:25.194959   57927 logs.go:123] Gathering logs for kube-scheduler [3815d034e8cc] ...
	I0925 11:30:25.194997   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3815d034e8cc"
	I0925 11:30:25.229067   57927 logs.go:123] Gathering logs for kube-proxy [3061d1aa366b] ...
	I0925 11:30:25.229094   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3061d1aa366b"
	I0925 11:30:25.256359   57927 logs.go:123] Gathering logs for kubernetes-dashboard [f3cb7eacbd5f] ...
	I0925 11:30:25.256386   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3cb7eacbd5f"
	I0925 11:30:25.280428   57927 logs.go:123] Gathering logs for kube-apiserver [d7bd5b496cbd] ...
	I0925 11:30:25.280459   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bd5b496cbd"
	I0925 11:30:25.330876   57927 logs.go:123] Gathering logs for coredns [f04ac298d08a] ...
	I0925 11:30:25.330902   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04ac298d08a"
	I0925 11:30:25.353121   57927 logs.go:123] Gathering logs for coredns [7603adb1cbbb] ...
	I0925 11:30:25.353148   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7603adb1cbbb"
	I0925 11:30:25.375127   57927 logs.go:123] Gathering logs for storage-provisioner [0f7378f7cd7f] ...
	I0925 11:30:25.375154   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f7378f7cd7f"
	I0925 11:30:25.402664   57927 logs.go:123] Gathering logs for storage-provisioner [b9d2c22b02cb] ...
	I0925 11:30:25.402690   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9d2c22b02cb"
	I0925 11:30:25.428214   57927 logs.go:123] Gathering logs for container status ...
	I0925 11:30:25.428238   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:25.509982   57927 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:25.510015   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:25.525584   57927 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:25.525623   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:25.696377   57927 logs.go:123] Gathering logs for kube-apiserver [8b9c731d3b7e] ...
	I0925 11:30:25.696402   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9c731d3b7e"
	I0925 11:30:25.734242   57927 logs.go:123] Gathering logs for kube-scheduler [fb845c120fcf] ...
	I0925 11:30:25.734271   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb845c120fcf"
	I0925 11:30:25.763410   57927 logs.go:123] Gathering logs for kube-proxy [30075b5efc6f] ...
	I0925 11:30:25.763436   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30075b5efc6f"
	I0925 11:30:25.797529   57927 logs.go:123] Gathering logs for kube-controller-manager [b75d214e650c] ...
	I0925 11:30:25.797556   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75d214e650c"
	I0925 11:30:25.843899   57927 logs.go:123] Gathering logs for kube-controller-manager [1e96b0e25a6d] ...
	I0925 11:30:25.843927   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e96b0e25a6d"
	I0925 11:30:25.896478   57927 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:25.896507   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
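The "Gathering logs" passes that recur through this section all follow one pattern: enumerate the container IDs for each control-plane component via docker ps name filters, then tail each container, plus journalctl for kubelet/Docker and dmesg. A compact sketch of the same collection (assumes a shell inside the VM with the Docker runtime, as in these tests):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager storage-provisioner kubernetes-dashboard; do
	  for id in $(docker ps -a --filter=name=k8s_$c --format={{.ID}}); do
	    echo "== $c $id =="; docker logs --tail 400 "$id"
	  done
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400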
	I0925 11:30:28.465765   57927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:30:28.480996   57927 api_server.go:72] duration metric: took 4m15.769590927s to wait for apiserver process to appear ...
	I0925 11:30:28.481023   57927 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:30:28.481101   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:25.631323   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:28.129055   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:30.749642   57426 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0925 11:30:30.749742   57426 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 11:30:30.749858   57426 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 11:30:30.749944   57426 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 11:30:30.750021   57426 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0925 11:30:30.750109   57426 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 11:30:30.750191   57426 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 11:30:30.750247   57426 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0925 11:30:30.750371   57426 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 11:30:30.751913   57426 out.go:204]   - Generating certificates and keys ...
	I0925 11:30:30.752003   57426 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 11:30:30.752119   57426 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 11:30:30.752232   57426 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0925 11:30:30.752318   57426 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0925 11:30:30.752414   57426 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0925 11:30:30.752468   57426 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0925 11:30:30.752570   57426 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0925 11:30:30.752681   57426 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0925 11:30:30.752781   57426 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0925 11:30:30.752890   57426 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0925 11:30:30.752940   57426 kubeadm.go:322] [certs] Using the existing "sa" key
	I0925 11:30:30.753020   57426 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 11:30:30.753090   57426 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 11:30:30.753154   57426 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 11:30:30.753251   57426 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 11:30:30.753324   57426 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 11:30:30.753398   57426 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 11:30:30.755107   57426 out.go:204]   - Booting up control plane ...
	I0925 11:30:30.755208   57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 11:30:30.755334   57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 11:30:30.755426   57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 11:30:30.755500   57426 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 11:30:30.755652   57426 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 11:30:30.755754   57426 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.505077 seconds
	I0925 11:30:30.755912   57426 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 11:30:30.756083   57426 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 11:30:30.756182   57426 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 11:30:30.756384   57426 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-694015 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0925 11:30:30.756471   57426 kubeadm.go:322] [bootstrap-token] Using token: snq27o.n0f9uw50v17gbayd
	I0925 11:30:28.509506   57927 logs.go:284] 2 containers: [8b9c731d3b7e d7bd5b496cbd]
	I0925 11:30:28.509575   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:28.532621   57927 logs.go:284] 2 containers: [398bd2a5d8a1 5885667a7efa]
	I0925 11:30:28.532723   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:28.554799   57927 logs.go:284] 2 containers: [f04ac298d08a 7603adb1cbbb]
	I0925 11:30:28.554878   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:28.574977   57927 logs.go:284] 2 containers: [3815d034e8cc fb845c120fcf]
	I0925 11:30:28.575048   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:28.596014   57927 logs.go:284] 2 containers: [3061d1aa366b 30075b5efc6f]
	I0925 11:30:28.596094   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:28.616627   57927 logs.go:284] 2 containers: [b75d214e650c 1e96b0e25a6d]
	I0925 11:30:28.616712   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:28.636762   57927 logs.go:284] 0 containers: []
	W0925 11:30:28.636782   57927 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:28.636838   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:28.659028   57927 logs.go:284] 2 containers: [0f7378f7cd7f b9d2c22b02cb]
	I0925 11:30:28.659094   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:28.680689   57927 logs.go:284] 1 containers: [f3cb7eacbd5f]
	I0925 11:30:28.680722   57927 logs.go:123] Gathering logs for etcd [398bd2a5d8a1] ...
	I0925 11:30:28.680736   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 398bd2a5d8a1"
	I0925 11:30:28.714051   57927 logs.go:123] Gathering logs for etcd [5885667a7efa] ...
	I0925 11:30:28.714078   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5885667a7efa"
	I0925 11:30:28.762170   57927 logs.go:123] Gathering logs for kubernetes-dashboard [f3cb7eacbd5f] ...
	I0925 11:30:28.762204   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3cb7eacbd5f"
	I0925 11:30:28.788343   57927 logs.go:123] Gathering logs for container status ...
	I0925 11:30:28.788371   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:28.869517   57927 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:28.869548   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:29.002897   57927 logs.go:123] Gathering logs for coredns [7603adb1cbbb] ...
	I0925 11:30:29.002920   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7603adb1cbbb"
	I0925 11:30:29.032416   57927 logs.go:123] Gathering logs for kube-scheduler [3815d034e8cc] ...
	I0925 11:30:29.032444   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3815d034e8cc"
	I0925 11:30:29.063893   57927 logs.go:123] Gathering logs for kube-proxy [3061d1aa366b] ...
	I0925 11:30:29.063921   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3061d1aa366b"
	I0925 11:30:29.089890   57927 logs.go:123] Gathering logs for kube-controller-manager [b75d214e650c] ...
	I0925 11:30:29.089916   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75d214e650c"
	I0925 11:30:29.132797   57927 logs.go:123] Gathering logs for storage-provisioner [b9d2c22b02cb] ...
	I0925 11:30:29.132827   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9d2c22b02cb"
	I0925 11:30:29.155350   57927 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:29.155371   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:30:29.213418   57927 logs.go:123] Gathering logs for kube-apiserver [d7bd5b496cbd] ...
	I0925 11:30:29.213447   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bd5b496cbd"
	I0925 11:30:29.254863   57927 logs.go:123] Gathering logs for coredns [f04ac298d08a] ...
	I0925 11:30:29.254891   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04ac298d08a"
	I0925 11:30:29.277677   57927 logs.go:123] Gathering logs for kube-scheduler [fb845c120fcf] ...
	I0925 11:30:29.277709   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb845c120fcf"
	I0925 11:30:29.308393   57927 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:29.308422   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:29.330968   57927 logs.go:123] Gathering logs for kube-apiserver [8b9c731d3b7e] ...
	I0925 11:30:29.330989   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9c731d3b7e"
	I0925 11:30:29.374515   57927 logs.go:123] Gathering logs for kube-proxy [30075b5efc6f] ...
	I0925 11:30:29.374542   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30075b5efc6f"
	I0925 11:30:29.399946   57927 logs.go:123] Gathering logs for kube-controller-manager [1e96b0e25a6d] ...
	I0925 11:30:29.399975   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e96b0e25a6d"
	I0925 11:30:29.445837   57927 logs.go:123] Gathering logs for storage-provisioner [0f7378f7cd7f] ...
	I0925 11:30:29.445870   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f7378f7cd7f"
	I0925 11:30:29.468320   57927 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:29.468346   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:32.042767   57927 api_server.go:253] Checking apiserver healthz at https://192.168.61.208:8444/healthz ...
	I0925 11:30:32.048546   57927 api_server.go:279] https://192.168.61.208:8444/healthz returned 200:
	ok
	I0925 11:30:32.052014   57927 api_server.go:141] control plane version: v1.28.2
	I0925 11:30:32.052036   57927 api_server.go:131] duration metric: took 3.571006059s to wait for apiserver health ...
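The healthz wait is a plain HTTPS probe of the apiserver, here on the profile's non-default port 8444 (hence "default-k8s-diff-port"). A manual equivalent (sketch; -k skips certificate verification, or point curl at the profile's CA instead):

	curl -k https://192.168.61.208:8444/healthz
	# a healthy apiserver answers 200 with body: ok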
	I0925 11:30:32.052046   57927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:30:32.052108   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:30:32.083762   57927 logs.go:284] 2 containers: [8b9c731d3b7e d7bd5b496cbd]
	I0925 11:30:32.083848   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:30:32.106317   57927 logs.go:284] 2 containers: [398bd2a5d8a1 5885667a7efa]
	I0925 11:30:32.106392   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:30:32.128245   57927 logs.go:284] 2 containers: [f04ac298d08a 7603adb1cbbb]
	I0925 11:30:32.128333   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:30:32.148973   57927 logs.go:284] 2 containers: [3815d034e8cc fb845c120fcf]
	I0925 11:30:32.149052   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:30:32.174028   57927 logs.go:284] 2 containers: [3061d1aa366b 30075b5efc6f]
	I0925 11:30:32.174103   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:30:32.196115   57927 logs.go:284] 2 containers: [b75d214e650c 1e96b0e25a6d]
	I0925 11:30:32.196181   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:30:32.216678   57927 logs.go:284] 0 containers: []
	W0925 11:30:32.216702   57927 logs.go:286] No container was found matching "kindnet"
	I0925 11:30:32.216757   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:30:32.237388   57927 logs.go:284] 1 containers: [f3cb7eacbd5f]
	I0925 11:30:32.237473   57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:30:32.257088   57927 logs.go:284] 2 containers: [0f7378f7cd7f b9d2c22b02cb]
	I0925 11:30:32.257112   57927 logs.go:123] Gathering logs for kubelet ...
	I0925 11:30:32.257122   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 11:30:32.327894   57927 logs.go:123] Gathering logs for kube-apiserver [8b9c731d3b7e] ...
	I0925 11:30:32.327929   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9c731d3b7e"
	I0925 11:30:32.365128   57927 logs.go:123] Gathering logs for kube-scheduler [3815d034e8cc] ...
	I0925 11:30:32.365156   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3815d034e8cc"
	I0925 11:30:32.394664   57927 logs.go:123] Gathering logs for kubernetes-dashboard [f3cb7eacbd5f] ...
	I0925 11:30:32.394703   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3cb7eacbd5f"
	I0925 11:30:32.450709   57927 logs.go:123] Gathering logs for Docker ...
	I0925 11:30:32.450737   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:30:32.512407   57927 logs.go:123] Gathering logs for container status ...
	I0925 11:30:32.512442   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:30:32.602958   57927 logs.go:123] Gathering logs for kube-apiserver [d7bd5b496cbd] ...
	I0925 11:30:32.602985   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bd5b496cbd"
	I0925 11:30:32.646449   57927 logs.go:123] Gathering logs for etcd [5885667a7efa] ...
	I0925 11:30:32.646478   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5885667a7efa"
	I0925 11:30:32.693817   57927 logs.go:123] Gathering logs for coredns [7603adb1cbbb] ...
	I0925 11:30:32.693843   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7603adb1cbbb"
	I0925 11:30:32.728336   57927 logs.go:123] Gathering logs for kube-proxy [3061d1aa366b] ...
	I0925 11:30:32.728364   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3061d1aa366b"
	I0925 11:30:32.754018   57927 logs.go:123] Gathering logs for kube-controller-manager [1e96b0e25a6d] ...
	I0925 11:30:32.754053   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e96b0e25a6d"
	I0925 11:30:32.791438   57927 logs.go:123] Gathering logs for storage-provisioner [0f7378f7cd7f] ...
	I0925 11:30:32.791473   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f7378f7cd7f"
	I0925 11:30:32.813473   57927 logs.go:123] Gathering logs for dmesg ...
	I0925 11:30:32.813501   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:30:32.827795   57927 logs.go:123] Gathering logs for etcd [398bd2a5d8a1] ...
	I0925 11:30:32.827824   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 398bd2a5d8a1"
	I0925 11:30:32.862910   57927 logs.go:123] Gathering logs for kube-scheduler [fb845c120fcf] ...
	I0925 11:30:32.862934   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb845c120fcf"
	I0925 11:30:32.899610   57927 logs.go:123] Gathering logs for kube-controller-manager [b75d214e650c] ...
	I0925 11:30:32.899642   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75d214e650c"
	I0925 11:30:32.941021   57927 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:30:32.941056   57927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:30:33.072749   57927 logs.go:123] Gathering logs for coredns [f04ac298d08a] ...
	I0925 11:30:33.072786   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04ac298d08a"
	I0925 11:30:33.105984   57927 logs.go:123] Gathering logs for kube-proxy [30075b5efc6f] ...
	I0925 11:30:33.106016   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30075b5efc6f"
	I0925 11:30:33.132338   57927 logs.go:123] Gathering logs for storage-provisioner [b9d2c22b02cb] ...
	I0925 11:30:33.132366   57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9d2c22b02cb"
	I0925 11:30:30.629720   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:33.133383   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:30.758173   57426 out.go:204]   - Configuring RBAC rules ...
	I0925 11:30:30.758310   57426 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 11:30:30.758487   57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 11:30:30.758649   57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 11:30:30.758810   57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 11:30:30.758962   57426 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 11:30:30.759033   57426 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 11:30:30.759112   57426 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 11:30:30.759121   57426 kubeadm.go:322] 
	I0925 11:30:30.759191   57426 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 11:30:30.759205   57426 kubeadm.go:322] 
	I0925 11:30:30.759275   57426 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 11:30:30.759285   57426 kubeadm.go:322] 
	I0925 11:30:30.759329   57426 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 11:30:30.759379   57426 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 11:30:30.759421   57426 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 11:30:30.759429   57426 kubeadm.go:322] 
	I0925 11:30:30.759483   57426 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 11:30:30.759595   57426 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 11:30:30.759689   57426 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 11:30:30.759705   57426 kubeadm.go:322] 
	I0925 11:30:30.759821   57426 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0925 11:30:30.759962   57426 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 11:30:30.759977   57426 kubeadm.go:322] 
	I0925 11:30:30.760084   57426 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token snq27o.n0f9uw50v17gbayd \
	I0925 11:30:30.760216   57426 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 \
	I0925 11:30:30.760255   57426 kubeadm.go:322]     --control-plane 	  
	I0925 11:30:30.760264   57426 kubeadm.go:322] 
	I0925 11:30:30.760361   57426 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 11:30:30.760370   57426 kubeadm.go:322] 
	I0925 11:30:30.760469   57426 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token snq27o.n0f9uw50v17gbayd \
	I0925 11:30:30.760617   57426 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 
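The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. It can be re-derived on the node from the CA cert — a sketch using the standard kubeadm procedure, with the cert path taken from the [certs] lines above:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'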
	I0925 11:30:30.760630   57426 cni.go:84] Creating CNI manager for ""
	I0925 11:30:30.760655   57426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 11:30:30.760693   57426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 11:30:30.760827   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:30.760899   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=old-k8s-version-694015 minikube.k8s.io/updated_at=2023_09_25T11_30_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:30.820984   57426 ops.go:34] apiserver oom_adj: -16
	I0925 11:30:31.034555   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:31.165894   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:31.768765   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:32.269393   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:32.768687   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:33.269126   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:33.768794   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:34.269149   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:34.769469   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:35.268685   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:35.664427   57927 system_pods.go:59] 8 kube-system pods found
	I0925 11:30:35.664451   57927 system_pods.go:61] "coredns-5dd5756b68-lp744" [67024c7b-a800-4c05-80f8-ad56b637d721] Running
	I0925 11:30:35.664456   57927 system_pods.go:61] "etcd-default-k8s-diff-port-319133" [bc48a820-15fc-46c3-be99-4842fec268b5] Running
	I0925 11:30:35.664461   57927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-319133" [04c9e550-fac9-4b14-a53f-f49a8d28f3aa] Running
	I0925 11:30:35.664466   57927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-319133" [84d26a48-a3cb-480e-818a-04e47c47a04a] Running
	I0925 11:30:35.664473   57927 system_pods.go:61] "kube-proxy-p4dnh" [8d162f05-34ef-431b-ac18-fc0ea1f48a5a] Running
	I0925 11:30:35.664479   57927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-319133" [d66d0709-f0f0-482b-88fc-cbf209c895fd] Running
	I0925 11:30:35.664489   57927 system_pods.go:61] "metrics-server-57f55c9bc5-wcdlv" [66045763-8356-4769-930f-a82fc160d7fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:35.664507   57927 system_pods.go:61] "storage-provisioner" [eaa8bad6-4a31-4429-98ff-099273d7184f] Running
	I0925 11:30:35.664518   57927 system_pods.go:74] duration metric: took 3.612465435s to wait for pod list to return data ...
	I0925 11:30:35.664526   57927 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:30:35.669238   57927 default_sa.go:45] found service account: "default"
	I0925 11:30:35.669258   57927 default_sa.go:55] duration metric: took 4.728219ms for default service account to be created ...
	I0925 11:30:35.669266   57927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:30:35.677224   57927 system_pods.go:86] 8 kube-system pods found
	I0925 11:30:35.677248   57927 system_pods.go:89] "coredns-5dd5756b68-lp744" [67024c7b-a800-4c05-80f8-ad56b637d721] Running
	I0925 11:30:35.677254   57927 system_pods.go:89] "etcd-default-k8s-diff-port-319133" [bc48a820-15fc-46c3-be99-4842fec268b5] Running
	I0925 11:30:35.677260   57927 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-319133" [04c9e550-fac9-4b14-a53f-f49a8d28f3aa] Running
	I0925 11:30:35.677265   57927 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-319133" [84d26a48-a3cb-480e-818a-04e47c47a04a] Running
	I0925 11:30:35.677269   57927 system_pods.go:89] "kube-proxy-p4dnh" [8d162f05-34ef-431b-ac18-fc0ea1f48a5a] Running
	I0925 11:30:35.677273   57927 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-319133" [d66d0709-f0f0-482b-88fc-cbf209c895fd] Running
	I0925 11:30:35.677279   57927 system_pods.go:89] "metrics-server-57f55c9bc5-wcdlv" [66045763-8356-4769-930f-a82fc160d7fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:35.677285   57927 system_pods.go:89] "storage-provisioner" [eaa8bad6-4a31-4429-98ff-099273d7184f] Running
	I0925 11:30:35.677291   57927 system_pods.go:126] duration metric: took 8.021227ms to wait for k8s-apps to be running ...
	I0925 11:30:35.677301   57927 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 11:30:35.677340   57927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:35.696637   57927 system_svc.go:56] duration metric: took 19.327902ms WaitForService to wait for kubelet.
	I0925 11:30:35.696659   57927 kubeadm.go:581] duration metric: took 4m22.985262397s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 11:30:35.696712   57927 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:30:35.701675   57927 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:30:35.701709   57927 node_conditions.go:123] node cpu capacity is 2
	I0925 11:30:35.701719   57927 node_conditions.go:105] duration metric: took 4.999654ms to run NodePressure ...
	I0925 11:30:35.701730   57927 start.go:228] waiting for startup goroutines ...
	I0925 11:30:35.701737   57927 start.go:233] waiting for cluster config update ...
	I0925 11:30:35.701749   57927 start.go:242] writing updated cluster config ...
	I0925 11:30:35.702076   57927 ssh_runner.go:195] Run: rm -f paused
	I0925 11:30:35.751111   57927 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0925 11:30:35.753033   57927 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-319133" cluster and "default" namespace by default
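As with the earlier profile, the final line reports the skew between the host kubectl and the cluster's apiserver; minikube only warns when the minor versions diverge. Checking it by hand (sketch):

	kubectl version -o json   # compare .clientVersion and .serverVersion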
	I0925 11:30:35.134183   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:37.629084   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:35.769384   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:36.269510   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:36.768848   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:37.268799   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:37.769259   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:38.269444   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:38.769081   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:39.269471   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:39.768795   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:40.269215   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:39.631655   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:42.128083   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:40.768992   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:41.269161   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:41.768782   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:42.269438   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:42.769149   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:43.268490   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:43.768911   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:44.269363   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:44.769428   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:45.268548   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:45.769489   57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:46.046613   57426 kubeadm.go:1081] duration metric: took 15.285826285s to wait for elevateKubeSystemPrivileges.
	I0925 11:30:46.046655   57426 kubeadm.go:406] StartCluster complete in 5m34.119546847s
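The burst of `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: after kubeadm init, minikube creates the minikube-rbac cluster-admin binding (see the clusterrolebinding call earlier) and then polls — roughly every half second, judging by the timestamps — until the default service account appears. The same wait as a one-liner (sketch; paths verbatim from the log):

	until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done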
	I0925 11:30:46.046676   57426 settings.go:142] acquiring lock: {Name:mk372f3d0f6e5777ebfc48341e146821f27f636c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:30:46.046764   57426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:30:46.048206   57426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:30:46.048432   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 11:30:46.048574   57426 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 11:30:46.048644   57426 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-694015"
	I0925 11:30:46.048653   57426 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-694015"
	I0925 11:30:46.048678   57426 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-694015"
	I0925 11:30:46.048687   57426 addons.go:69] Setting dashboard=true in profile "old-k8s-version-694015"
	W0925 11:30:46.048690   57426 addons.go:240] addon storage-provisioner should already be in state true
	I0925 11:30:46.048698   57426 addons.go:231] Setting addon dashboard=true in "old-k8s-version-694015"
	W0925 11:30:46.048709   57426 addons.go:240] addon dashboard should already be in state true
	I0925 11:30:46.048720   57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 11:30:46.048735   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.048744   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.048818   57426 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-694015"
	I0925 11:30:46.048847   57426 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-694015"
	W0925 11:30:46.048855   57426 addons.go:240] addon metrics-server should already be in state true
	I0925 11:30:46.048680   57426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-694015"
	I0925 11:30:46.048796   57426 cache.go:107] acquiring lock: {Name:mk67fca357e44d730577a3f111223198f60ef976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:30:46.048888   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.048935   57426 cache.go:115] /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0925 11:30:46.048944   57426 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 153.391µs
	I0925 11:30:46.048955   57426 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0925 11:30:46.048963   57426 cache.go:87] Successfully saved all images to host disk.
	I0925 11:30:46.049135   57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 11:30:46.049144   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049162   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049168   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049183   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049247   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049260   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049320   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049333   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.049505   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.049555   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.072180   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
	I0925 11:30:46.072238   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0925 11:30:46.072269   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I0925 11:30:46.072356   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0925 11:30:46.072357   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I0925 11:30:46.072696   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.072776   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.072860   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.073344   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.073364   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.073496   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.073509   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.073509   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.073756   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.073762   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.073964   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.074195   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.074210   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.074253   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.074286   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.074439   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.074467   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.074610   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.074656   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.074686   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.074715   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.074930   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.075069   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.075101   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.075234   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.075269   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.075582   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.075811   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.077659   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.077697   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.094611   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
	I0925 11:30:46.097022   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
	I0925 11:30:46.097145   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.097460   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.097748   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.097767   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.098172   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.098314   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.098333   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.098564   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.098618   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.099229   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.101256   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.103863   57426 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0925 11:30:46.102124   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.102436   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0925 11:30:46.106576   57426 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0925 11:30:46.105560   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.109500   57426 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 11:30:46.108220   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0925 11:30:46.108845   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.110913   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.110969   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0925 11:30:46.110985   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.110999   57426 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:30:46.111011   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 11:30:46.111024   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.112450   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.112637   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.112839   57426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:30:46.112862   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.115509   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.115949   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.115983   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.116123   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.116214   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.116253   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.116342   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.116466   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.116484   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.116508   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.116774   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.116925   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.117104   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.117252   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.119073   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.119413   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.119430   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.119685   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.119854   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.120011   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.120148   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
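The three `sshutil.go:53] new ssh client` lines above are minikube opening key-based SSH connections to the VM (192.168.50.17:22, user docker, the profile's id_rsa), one per concurrent addon-install goroutine. As a rough sketch of what such a client amounts to with golang.org/x/crypto/ssh (illustrative only; newSSHClient is our name, not minikube's sshutil API):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newSSHClient dials a key-based SSH session like the ones logged above.
    // Hypothetical helper for illustration, not minikube's actual sshutil code.
    func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath) // e.g. .../machines/old-k8s-version-694015/id_rsa
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user, // "docker" in the log lines above
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
    }

    func main() {
        client, err := newSSHClient("192.168.50.17", 22, "/path/to/id_rsa", "docker")
        if err != nil {
            panic(err)
        }
        defer client.Close()
    }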
	I0925 11:30:46.127174   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I0925 11:30:46.127843   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.128399   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.128428   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.128967   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.129155   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.129945   57426 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-694015" context rescaled to 1 replicas
	I0925 11:30:46.129977   57426 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 11:30:46.131741   57426 out.go:177] * Verifying Kubernetes components...
	I0925 11:30:46.133087   57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:46.130848   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.134728   57426 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0925 11:30:44.129372   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:46.133247   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:48.630362   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:46.136080   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 11:30:46.136097   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0925 11:30:46.136115   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.139231   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.139692   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.139718   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.139957   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.140113   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.140252   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.140377   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.147885   57426 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-694015"
	W0925 11:30:46.147907   57426 addons.go:240] addon default-storageclass should already be in state true
	I0925 11:30:46.147934   57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
	I0925 11:30:46.148356   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.148384   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.173474   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0925 11:30:46.174243   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.174879   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.174900   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.176033   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.176694   57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:30:46.176736   57426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:30:46.196631   57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
	I0925 11:30:46.197107   57426 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:30:46.197645   57426 main.go:141] libmachine: Using API Version  1
	I0925 11:30:46.197665   57426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:30:46.198067   57426 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:30:46.198270   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
	I0925 11:30:46.200093   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
	I0925 11:30:46.200354   57426 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 11:30:46.200371   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 11:30:46.200390   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
	I0925 11:30:46.203486   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.203884   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
	I0925 11:30:46.203998   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
	I0925 11:30:46.204172   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
	I0925 11:30:46.204342   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
	I0925 11:30:46.204489   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
	I0925 11:30:46.204636   57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
	I0925 11:30:46.413931   57426 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-694015" to be "Ready" ...
	I0925 11:30:46.414008   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 11:30:46.416569   57426 node_ready.go:49] node "old-k8s-version-694015" has status "Ready":"True"
	I0925 11:30:46.416586   57426 node_ready.go:38] duration metric: took 2.626333ms waiting for node "old-k8s-version-694015" to be "Ready" ...
	I0925 11:30:46.416594   57426 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:30:46.420795   57426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace to be "Ready" ...
	I0925 11:30:46.484507   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 11:30:46.484532   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0925 11:30:46.532417   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0925 11:30:46.532443   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0925 11:30:46.575299   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 11:30:46.575317   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0925 11:30:46.595994   57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:30:46.596018   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
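Each `ssh_runner.go:362] scp memory --> <path> (N bytes)` line above means the addon manifest is rendered on the host and streamed from an in-memory byte slice straight over the SSH session to a root-owned path inside the VM; no intermediate file is written on the host. A minimal sketch of that pattern, assuming an established *ssh.Client as in the previous snippet (copyToRemote is a hypothetical helper, not minikube's actual ssh_runner code):

    package sketch

    import "golang.org/x/crypto/ssh"

    // copyToRemote streams in-memory bytes to a root-owned destination on the
    // VM, in the spirit of the "scp memory --> ..." lines above.
    func copyToRemote(client *ssh.Client, contents []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()

        stdin, err := sess.StdinPipe()
        if err != nil {
            return err
        }
        // sudo tee because /etc/kubernetes/addons is root-owned.
        if err := sess.Start("sudo tee " + dst + " >/dev/null"); err != nil {
            return err
        }
        if _, err := stdin.Write(contents); err != nil {
            return err
        }
        stdin.Close() // EOF lets tee flush and exit
        return sess.Wait()
    }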
	I0925 11:30:46.652448   57426 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	registry.k8s.io/pause:3.1
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0925 11:30:46.652473   57426 cache_images.go:84] Images are preloaded, skipping loading
	I0925 11:30:46.652480   57426 cache_images.go:262] succeeded pushing to: old-k8s-version-694015
	I0925 11:30:46.652483   57426 cache_images.go:263] failed pushing to: 
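`docker.go:664] Got preloaded images` is the result of the `docker images --format {{.Repository}}:{{.Tag}}` run at 11:30:46.112839: every image the v1.16.0 control plane needs is already in the VM's Docker cache, so cache_images can skip extracting the preload tarball. The check is essentially set membership; a hedged sketch of that logic (assumed for illustration; the real comparison lives in minikube's cache_images.go):

    package sketch

    import "strings"

    // imagesPreloaded reports whether every required image already appears in
    // the output of `docker images --format {{.Repository}}:{{.Tag}}`.
    func imagesPreloaded(dockerImagesOutput string, required []string) bool {
        have := make(map[string]bool)
        for _, line := range strings.Split(strings.TrimSpace(dockerImagesOutput), "\n") {
            have[strings.TrimSpace(line)] = true
        }
        for _, img := range required {
            if !have[img] { // e.g. "k8s.gcr.io/kube-apiserver:v1.16.0"
                return false // something is missing: load the preload tarball
            }
        }
        return true // all present: "Images are preloaded, skipping loading"
    }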
	I0925 11:30:46.652504   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:46.652518   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:46.652957   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:46.652963   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:46.652991   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:46.653007   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:46.653020   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:46.653288   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:46.653304   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:46.705521   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:30:46.707099   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:30:46.712115   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0925 11:30:46.712134   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0925 11:30:46.762833   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 11:30:46.851711   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0925 11:30:46.851753   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0925 11:30:47.115165   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0925 11:30:47.115193   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0925 11:30:47.386363   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0925 11:30:47.386386   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0925 11:30:47.610468   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0925 11:30:47.610490   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0925 11:30:47.697559   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0925 11:30:47.697578   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0925 11:30:47.864150   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0925 11:30:47.864169   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0925 11:30:47.915917   57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:30:47.915945   57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0925 11:30:48.000793   57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.586742998s)
	I0925 11:30:48.000836   57426 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
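The 1.59s `sed` pipeline completed above rewrites the live coredns ConfigMap in place: it fetches the Corefile, inserts a `hosts` stanza immediately before the `forward . /etc/resolv.conf` plugin (plus a `log` directive after `errors`), and pushes the result back with `kubectl replace`. Unescaping the sed expression, the injected stanza is:

            hosts {
               192.168.50.1 host.minikube.internal
               fallthrough
            }

With this in place, pods resolve host.minikube.internal to the libvirt gateway 192.168.50.1, and every other name falls through to the usual forwarders.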
	I0925 11:30:48.085411   57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:30:48.190617   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.485051258s)
	I0925 11:30:48.190677   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.190691   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.191035   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.191056   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.191068   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.191078   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.192850   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.192853   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.192876   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.192885   57426 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-694015"
	I0925 11:30:48.465209   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:48.575177   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.868034342s)
	I0925 11:30:48.575232   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575246   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575181   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.812311763s)
	I0925 11:30:48.575317   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575328   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575540   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.575560   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.575570   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575579   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575635   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.575742   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.575772   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.575781   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.575789   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.575797   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.575878   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.575903   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.575911   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.577345   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.577384   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.577406   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:48.577435   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:48.577451   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:48.577940   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:48.577944   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:48.577964   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:49.298546   57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.21307781s)
	I0925 11:30:49.298606   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:49.298628   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:49.302266   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:49.302272   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:49.302307   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:49.302321   57426 main.go:141] libmachine: Making call to close driver server
	I0925 11:30:49.302331   57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
	I0925 11:30:49.302655   57426 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:30:49.302695   57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
	I0925 11:30:49.302717   57426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:30:49.304441   57426 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-694015 addons enable metrics-server	
	
	
	I0925 11:30:49.306061   57426 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0925 11:30:49.307539   57426 addons.go:502] enable addons completed in 3.258962527s: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
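The pod_ready.go:102 lines that follow are the wait loop behind minikube's --wait behavior: each process re-reads the pod from the apiserver every couple of seconds and logs the "still not Ready" branch until the pod's Ready condition turns True or the deadline expires. A minimal client-go sketch of the same check (waitPodReady is our name; the kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod's Ready condition, mirroring the
    // pod_ready.go:102 lines below. Hypothetical helper, not minikube's code.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil // pod is Ready
                    }
                }
                fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
            }
            time.Sleep(2 * time.Second) // roughly the cadence seen in the log
        }
        return fmt.Errorf("timed out waiting for pod %s/%s to be Ready", ns, name)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(cs, "kube-system", "coredns-5644d7b6d9-qnqxm", 6*time.Minute); err != nil {
            panic(err)
        }
    }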
	I0925 11:30:50.630959   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:50.940378   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	[~120 near-identical pod_ready.go:102 polling lines elided for readability: process 57426 (profile old-k8s-version-694015) re-checks pod "coredns-5644d7b6d9-qnqxm" and process 59899 (the concurrently running embed-certs-094323 test, whose output is interleaved throughout this log) re-checks pod "metrics-server-57f55c9bc5-xcns4" every ~2-2.5s from 11:30:50 through 11:33:13; every single check reports status "Ready":"False"]
	I0925 11:33:13.128211   59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:13.939288   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:14.611278   59899 pod_ready.go:81] duration metric: took 4m0.000327599s waiting for pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:14.611332   59899 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0925 11:33:14.611349   59899 pod_ready.go:38] duration metric: took 4m12.007655968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:33:14.611376   59899 kubeadm.go:640] restartCluster took 4m31.218254898s
	W0925 11:33:14.611443   59899 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0925 11:33:14.611477   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0925 11:33:15.940496   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:18.440278   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:23.826236   59899 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (9.214737742s)
	I0925 11:33:23.826300   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:33:23.840564   59899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 11:33:23.850760   59899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 11:33:23.860161   59899 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 11:33:23.860203   59899 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
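At this point process 59899 gives up on the in-place restart: restartCluster spent 4m31s and timed out waiting for the system pods, so it tears the control plane down with `kubeadm reset --force` (9.2s) and re-initializes from the staged /var/tmp/minikube/kubeadm.yaml. Note that the `ls -la` config check fails with exit status 2 precisely because reset has just deleted the old conf files, so the stale-config cleanup is correctly skipped. The control flow, as a hedged sketch (run stands in for the ssh_runner-style remote shell; the command strings are taken from the log, the structure is our paraphrase):

    package sketch

    // resetAndReinit paraphrases the fallback logged above: wipe the control
    // plane, skip stale-config cleanup if the old conf files are gone, then
    // re-run kubeadm init. run is a stand-in for minikube's ssh_runner.
    func resetAndReinit(run func(cmd string) error) error {
        const env = `sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" `
        if err := run(env + `kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`); err != nil {
            return err
        }
        if err := run(`sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf ` +
            `/etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf`); err == nil {
            // Stale configs still present: clean them up before init. Not the
            // case here; reset removed them, hence the exit-status-2
            // "config check failed, skipping stale config cleanup" line.
        }
        // Re-init with the staged config, plus the long --ignore-preflight-errors
        // list shown verbatim in the 11:33:23.860203 line above.
        return run(env + `kubeadm init --config /var/tmp/minikube/kubeadm.yaml`)
    }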
	I0925 11:33:20.938819   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:22.939228   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:24.940142   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:24.111104   59899 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 11:33:27.440968   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:29.937681   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:33.957801   59899 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0925 11:33:33.957861   59899 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 11:33:33.957964   59899 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 11:33:33.958127   59899 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 11:33:33.958257   59899 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 11:33:33.958352   59899 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 11:33:33.961247   59899 out.go:204]   - Generating certificates and keys ...
	I0925 11:33:33.961330   59899 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 11:33:33.961381   59899 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 11:33:33.961482   59899 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0925 11:33:33.961584   59899 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0925 11:33:33.961691   59899 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0925 11:33:33.961764   59899 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0925 11:33:33.961860   59899 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0925 11:33:33.961946   59899 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0925 11:33:33.962038   59899 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0925 11:33:33.962141   59899 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0925 11:33:33.962189   59899 kubeadm.go:322] [certs] Using the existing "sa" key
	I0925 11:33:33.962274   59899 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 11:33:33.962342   59899 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 11:33:33.962404   59899 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 11:33:33.962484   59899 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 11:33:33.962596   59899 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 11:33:33.962722   59899 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 11:33:33.962812   59899 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 11:33:33.964227   59899 out.go:204]   - Booting up control plane ...
	I0925 11:33:33.964334   59899 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 11:33:33.964411   59899 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 11:33:33.964484   59899 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 11:33:33.964622   59899 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 11:33:33.964767   59899 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 11:33:33.964843   59899 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 11:33:33.964974   59899 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 11:33:33.965033   59899 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.004093 seconds
	I0925 11:33:33.965122   59899 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 11:33:33.965219   59899 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 11:33:33.965300   59899 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 11:33:33.965551   59899 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-094323 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 11:33:33.965631   59899 kubeadm.go:322] [bootstrap-token] Using token: jxl01o.6st4cg36x4e3zwsq
	I0925 11:33:33.968152   59899 out.go:204]   - Configuring RBAC rules ...
	I0925 11:33:33.968255   59899 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 11:33:33.968324   59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 11:33:33.968463   59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 11:33:33.968579   59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 11:33:33.968719   59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 11:33:33.968841   59899 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 11:33:33.968990   59899 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 11:33:33.969057   59899 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 11:33:33.969115   59899 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 11:33:33.969125   59899 kubeadm.go:322] 
	I0925 11:33:33.969197   59899 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 11:33:33.969206   59899 kubeadm.go:322] 
	I0925 11:33:33.969302   59899 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 11:33:33.969309   59899 kubeadm.go:322] 
	I0925 11:33:33.969339   59899 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 11:33:33.969409   59899 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 11:33:33.969481   59899 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 11:33:33.969494   59899 kubeadm.go:322] 
	I0925 11:33:33.969577   59899 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0925 11:33:33.969592   59899 kubeadm.go:322] 
	I0925 11:33:33.969652   59899 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 11:33:33.969661   59899 kubeadm.go:322] 
	I0925 11:33:33.969721   59899 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 11:33:33.969820   59899 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 11:33:33.969931   59899 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 11:33:33.969945   59899 kubeadm.go:322] 
	I0925 11:33:33.970020   59899 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 11:33:33.970079   59899 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 11:33:33.970085   59899 kubeadm.go:322] 
	I0925 11:33:33.970149   59899 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jxl01o.6st4cg36x4e3zwsq \
	I0925 11:33:33.970246   59899 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 \
	I0925 11:33:33.970273   59899 kubeadm.go:322] 	--control-plane 
	I0925 11:33:33.970286   59899 kubeadm.go:322] 
	I0925 11:33:33.970379   59899 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 11:33:33.970391   59899 kubeadm.go:322] 
	I0925 11:33:33.970473   59899 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jxl01o.6st4cg36x4e3zwsq \
	I0925 11:33:33.970561   59899 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 
	I0925 11:33:33.970570   59899 cni.go:84] Creating CNI manager for ""
	I0925 11:33:33.970583   59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:33:33.973276   59899 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 11:33:33.974771   59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 11:33:33.991169   59899 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
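On Kubernetes v1.24+ the docker runtime goes through cri-dockerd, which unlike the old dockershim provides no default pod network, so minikube recommends its built-in bridge CNI (cni.go:158 above) and writes a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist. The log does not print the file itself; the snippet below shows the general shape of such a bridge conflist (illustrative values only, not the literal bytes minikube generated):

    package sketch

    // bridgeConflist shows the typical shape of a bridge CNI config like the
    // 1-k8s.conflist written above. Field values are common defaults, not the
    // actual contents of minikube's generated file.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`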
	I0925 11:33:34.014483   59899 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 11:33:34.014576   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:34.014605   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=embed-certs-094323 minikube.k8s.io/updated_at=2023_09_25T11_33_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:31.938903   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:34.438342   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:34.061656   59899 ops.go:34] apiserver oom_adj: -16
	I0925 11:33:34.486947   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:34.586316   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:35.181870   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:35.682572   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:36.182427   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:36.682439   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:37.182278   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:37.682264   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:38.181892   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:38.681964   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:36.938434   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:39.437659   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:39.181618   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:39.682052   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:40.181879   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:40.682579   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:41.182334   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:41.682270   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:42.181757   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:42.682314   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:43.181975   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:43.682310   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:41.438288   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:43.937112   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:44.182254   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:44.682566   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:45.181651   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:45.681891   59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:33:45.783591   59899 kubeadm.go:1081] duration metric: took 11.769084129s to wait for elevateKubeSystemPrivileges.
	I0925 11:33:45.783631   59899 kubeadm.go:406] StartCluster complete in 5m2.419220731s
	I0925 11:33:45.783654   59899 settings.go:142] acquiring lock: {Name:mk372f3d0f6e5777ebfc48341e146821f27f636c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:33:45.783749   59899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 11:33:45.785139   59899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:33:45.785373   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 11:33:45.785497   59899 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 11:33:45.785578   59899 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-094323"
	I0925 11:33:45.785591   59899 addons.go:69] Setting default-storageclass=true in profile "embed-certs-094323"
	I0925 11:33:45.785600   59899 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-094323"
	W0925 11:33:45.785608   59899 addons.go:240] addon storage-provisioner should already be in state true
	I0925 11:33:45.785610   59899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-094323"
	I0925 11:33:45.785613   59899 addons.go:69] Setting metrics-server=true in profile "embed-certs-094323"
	I0925 11:33:45.785629   59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 11:33:45.785624   59899 addons.go:69] Setting dashboard=true in profile "embed-certs-094323"
	I0925 11:33:45.785641   59899 addons.go:231] Setting addon metrics-server=true in "embed-certs-094323"
	I0925 11:33:45.785649   59899 host.go:66] Checking if "embed-certs-094323" exists ...
	W0925 11:33:45.785652   59899 addons.go:240] addon metrics-server should already be in state true
	I0925 11:33:45.785661   59899 addons.go:231] Setting addon dashboard=true in "embed-certs-094323"
	W0925 11:33:45.785671   59899 addons.go:240] addon dashboard should already be in state true
	I0925 11:33:45.785702   59899 host.go:66] Checking if "embed-certs-094323" exists ...
	I0925 11:33:45.785726   59899 host.go:66] Checking if "embed-certs-094323" exists ...
	I0925 11:33:45.785720   59899 cache.go:107] acquiring lock: {Name:mk67fca357e44d730577a3f111223198f60ef976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:33:45.785794   59899 cache.go:115] /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0925 11:33:45.785804   59899 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 97.126µs
	I0925 11:33:45.785813   59899 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0925 11:33:45.785842   59899 cache.go:87] Successfully saved all images to host disk.
	I0925 11:33:45.786040   59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 11:33:45.786074   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786077   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786103   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786119   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.786100   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786148   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.786175   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.786226   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.786382   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.786458   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.804658   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0925 11:33:45.804729   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I0925 11:33:45.804829   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0925 11:33:45.805237   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.805268   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.805835   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.805855   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.806126   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0925 11:33:45.806245   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.806461   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.806533   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.806584   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.806593   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.806608   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.806726   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41161
	I0925 11:33:45.806958   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.806973   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.807052   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.807117   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.807146   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.807158   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.807335   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.807550   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.807552   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.807628   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.807655   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.807678   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.807709   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.808075   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.808113   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.808146   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.808643   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.808695   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.809669   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.809713   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.815794   59899 addons.go:231] Setting addon default-storageclass=true in "embed-certs-094323"
	W0925 11:33:45.815817   59899 addons.go:240] addon default-storageclass should already be in state true
	I0925 11:33:45.815845   59899 host.go:66] Checking if "embed-certs-094323" exists ...
	I0925 11:33:45.816191   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.816218   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.818468   59899 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-094323" context rescaled to 1 replicas
	I0925 11:33:45.818498   59899 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 11:33:45.820484   59899 out.go:177] * Verifying Kubernetes components...
	I0925 11:33:45.821970   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:33:45.827608   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43233
	I0925 11:33:45.827764   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41093
	I0925 11:33:45.828140   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.828192   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.828742   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.828756   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.828865   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.828875   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.829243   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.829291   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.829499   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.829508   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.829541   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0925 11:33:45.830368   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.830816   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.830834   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.830898   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0925 11:33:45.831336   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.831343   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.831544   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.831741   59899 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:33:45.831767   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.831896   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.831910   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.831962   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.832006   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.834683   59899 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0925 11:33:45.833215   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.835296   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.836115   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.836132   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.836140   59899 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 11:33:45.835941   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.837552   59899 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:33:45.837565   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 11:33:45.837580   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.836081   59899 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 11:33:45.837626   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0925 11:33:45.837640   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.836328   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.837722   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.838263   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.838449   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:45.840153   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.841675   59899 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0925 11:33:45.843211   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0925 11:33:45.841916   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.842082   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.842734   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.842915   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.843565   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.844615   59899 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0925 11:33:45.845951   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0925 11:33:45.845966   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0925 11:33:45.845980   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.844700   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.844729   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.846027   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.844863   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.846043   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.844886   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.845165   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.846085   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.846265   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.846317   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.846412   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:45.846432   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.847139   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:45.847153   59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 11:33:45.847192   59899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 11:33:45.848989   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.849283   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.849314   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.849456   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.849635   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.849777   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.849913   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:45.862447   59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40913
	I0925 11:33:45.862828   59899 main.go:141] libmachine: () Calling .GetVersion
	I0925 11:33:45.863295   59899 main.go:141] libmachine: Using API Version  1
	I0925 11:33:45.863325   59899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 11:33:45.863706   59899 main.go:141] libmachine: () Calling .GetMachineName
	I0925 11:33:45.863888   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
	I0925 11:33:45.865511   59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
	I0925 11:33:45.865802   59899 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 11:33:45.865821   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 11:33:45.865840   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
	I0925 11:33:45.868353   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.868774   59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
	I0925 11:33:45.868808   59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
	I0925 11:33:45.868936   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
	I0925 11:33:45.869132   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
	I0925 11:33:45.869260   59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
	I0925 11:33:45.869371   59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
	I0925 11:33:46.090766   59899 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 11:33:46.090794   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0925 11:33:46.148251   59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:33:46.244486   59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 11:33:46.246747   59899 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 11:33:46.246767   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0925 11:33:46.285706   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0925 11:33:46.285733   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0925 11:33:46.399367   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0925 11:33:46.399389   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0925 11:33:46.454580   59899 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:33:46.454598   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0925 11:33:46.478692   59899 node_ready.go:35] waiting up to 6m0s for node "embed-certs-094323" to be "Ready" ...
	I0925 11:33:46.478749   59899 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0925 11:33:46.478754   59899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 11:33:46.478763   59899 cache_images.go:84] Images are preloaded, skipping loading
	I0925 11:33:46.478772   59899 cache_images.go:262] succeeded pushing to: embed-certs-094323
	I0925 11:33:46.478777   59899 cache_images.go:263] failed pushing to: 
	I0925 11:33:46.478797   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:46.478821   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:46.479120   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:46.479177   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:46.479190   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:46.479200   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:46.479138   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:46.479613   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:46.479623   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:46.479632   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:46.495731   59899 node_ready.go:49] node "embed-certs-094323" has status "Ready":"True"
	I0925 11:33:46.495756   59899 node_ready.go:38] duration metric: took 17.032177ms waiting for node "embed-certs-094323" to be "Ready" ...
	I0925 11:33:46.495768   59899 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:33:46.502666   59899 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:46.590707   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0925 11:33:46.590728   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0925 11:33:46.646116   59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:33:46.836729   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0925 11:33:46.836758   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0925 11:33:47.081956   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0925 11:33:47.081978   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0925 11:33:47.372971   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0925 11:33:47.372999   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0925 11:33:47.548990   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0925 11:33:47.549016   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0925 11:33:47.759403   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0925 11:33:47.759425   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0925 11:33:48.094571   59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:33:48.094601   59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0925 11:33:48.300509   59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 11:33:48.523994   59899 pod_ready.go:102] pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:49.536334   59899 pod_ready.go:92] pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.536354   59899 pod_ready.go:81] duration metric: took 3.03366041s waiting for pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.536365   59899 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.539583   59899 pod_ready.go:97] error getting pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-pbwqs" not found
	I0925 11:33:49.539613   59899 pod_ready.go:81] duration metric: took 3.241249ms waiting for pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:49.539624   59899 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-pbwqs" not found
	I0925 11:33:49.539633   59899 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.549714   59899 pod_ready.go:92] pod "etcd-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.549731   59899 pod_ready.go:81] duration metric: took 10.090379ms waiting for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.549742   59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.554903   59899 pod_ready.go:92] pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.554917   59899 pod_ready.go:81] duration metric: took 5.167429ms waiting for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.554927   59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.564229   59899 pod_ready.go:92] pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.564249   59899 pod_ready.go:81] duration metric: took 9.314363ms waiting for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.564261   59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pjwm2" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.568126   59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.41983793s)
	I0925 11:33:49.568187   59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.323661752s)
	I0925 11:33:49.568232   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568239   59899 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.089462417s)
	I0925 11:33:49.568251   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568256   59899 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0925 11:33:49.568301   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568319   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568360   59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.922215522s)
	I0925 11:33:49.568392   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568407   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568608   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.568626   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.568637   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568643   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.568674   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.568685   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568689   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.568695   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568697   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:49.568704   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.568646   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568716   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.568725   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.568613   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:49.568959   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:49.568977   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:49.569003   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.569015   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.569016   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.569024   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.569031   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:49.569036   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.569045   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:49.569048   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.569033   59899 addons.go:467] Verifying addon metrics-server=true in "embed-certs-094323"
	I0925 11:33:49.569276   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:49.569292   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:49.883443   59899 pod_ready.go:92] pod "kube-proxy-pjwm2" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:49.883465   59899 pod_ready.go:81] duration metric: took 319.196098ms waiting for pod "kube-proxy-pjwm2" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:49.883477   59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:50.292288   59899 pod_ready.go:92] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
	I0925 11:33:50.292314   59899 pod_ready.go:81] duration metric: took 408.829404ms waiting for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:50.292325   59899 pod_ready.go:38] duration metric: took 3.79654573s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:33:50.292349   59899 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:33:50.292413   59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:33:50.390976   59899 api_server.go:72] duration metric: took 4.572446849s to wait for apiserver process to appear ...
	I0925 11:33:50.390998   59899 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:33:50.391016   59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I0925 11:33:50.391107   59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.090546724s)
	I0925 11:33:50.391160   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:50.391179   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:50.391539   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:50.391540   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:50.391568   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:50.391584   59899 main.go:141] libmachine: Making call to close driver server
	I0925 11:33:50.391594   59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
	I0925 11:33:50.391810   59899 main.go:141] libmachine: Successfully made call to close driver server
	I0925 11:33:50.391822   59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
	I0925 11:33:50.391828   59899 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 11:33:50.393750   59899 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-094323 addons enable metrics-server	
	
	
	I0925 11:33:50.395438   59899 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0925 11:33:45.939462   57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
	I0925 11:33:47.439176   57426 pod_ready.go:97] node "old-k8s-version-694015" hosting pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.439201   57426 pod_ready.go:81] duration metric: took 3m1.018383263s waiting for pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:47.439210   57426 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-694015" hosting pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.439218   57426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:47.441757   57426 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rn247" not found
	I0925 11:33:47.441785   57426 pod_ready.go:81] duration metric: took 2.55834ms waiting for pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:47.441797   57426 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rn247" not found
	I0925 11:33:47.441806   57426 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gsdzk" in "kube-system" namespace to be "Ready" ...
	I0925 11:33:47.447728   57426 pod_ready.go:97] node "old-k8s-version-694015" hosting pod "kube-proxy-gsdzk" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.447759   57426 pod_ready.go:81] duration metric: took 5.944858ms waiting for pod "kube-proxy-gsdzk" in "kube-system" namespace to be "Ready" ...
	E0925 11:33:47.447770   57426 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-694015" hosting pod "kube-proxy-gsdzk" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
	I0925 11:33:47.447777   57426 pod_ready.go:38] duration metric: took 3m1.031173472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:33:47.447809   57426 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:33:47.447887   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:33:47.480326   57426 logs.go:284] 1 containers: [34825b8222f1]
	I0925 11:33:47.480410   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:33:47.500790   57426 logs.go:284] 1 containers: [4b655f8475a9]
	I0925 11:33:47.500883   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:33:47.521967   57426 logs.go:284] 1 containers: [c4e353aa787b]
	I0925 11:33:47.522043   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:33:47.542833   57426 logs.go:284] 1 containers: [08dbfa6061b3]
	I0925 11:33:47.542921   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:33:47.564220   57426 logs.go:284] 1 containers: [2bccdb65c1cc]
	I0925 11:33:47.564296   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:33:47.585142   57426 logs.go:284] 1 containers: [59225a8740b7]
	I0925 11:33:47.585233   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:33:47.604606   57426 logs.go:284] 0 containers: []
	W0925 11:33:47.604638   57426 logs.go:286] No container was found matching "kindnet"
	I0925 11:33:47.604734   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:33:47.634903   57426 logs.go:284] 1 containers: [0f9de8bda7fb]
	I0925 11:33:47.634987   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:33:47.659599   57426 logs.go:284] 1 containers: [90dc66317fc1]
	I0925 11:33:47.659654   57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
	I0925 11:33:47.659677   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
	I0925 11:33:47.713402   57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
	I0925 11:33:47.713441   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
	I0925 11:33:47.746308   57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
	I0925 11:33:47.746347   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
	I0925 11:33:47.777953   57426 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:33:47.777991   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:33:47.933013   57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
	I0925 11:33:47.933041   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
	I0925 11:33:47.959588   57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
	I0925 11:33:47.959623   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
	I0925 11:33:47.989240   57426 logs.go:123] Gathering logs for container status ...
	I0925 11:33:47.989285   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:33:48.069991   57426 logs.go:123] Gathering logs for kubelet ...
	I0925 11:33:48.070022   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 11:33:48.107511   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.108197   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.108438   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.108657   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940    1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
	W0925 11:33:48.109661   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.109891   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.110800   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.111045   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.111291   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.111524   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.112518   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.112765   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.112989   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113221   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113444   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113656   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.113877   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.114848   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:48.115076   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115297   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115517   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115743   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.115978   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.116194   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.148933   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:48.150648   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:48.152304   57426 logs.go:123] Gathering logs for dmesg ...
	I0925 11:33:48.152321   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:33:48.170706   57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
	I0925 11:33:48.170735   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
	I0925 11:33:48.204533   57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
	I0925 11:33:48.204574   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
	I0925 11:33:48.242201   57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
	I0925 11:33:48.242239   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
	I0925 11:33:48.305874   57426 logs.go:123] Gathering logs for Docker ...
	I0925 11:33:48.305916   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:33:48.375041   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:48.375074   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0925 11:33:48.375130   57426 out.go:239] X Problems detected in kubelet:
	W0925 11:33:48.375142   57426 out.go:239]   Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.375161   57426 out.go:239]   Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.375169   57426 out.go:239]   Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:48.375176   57426 out.go:239]   Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:48.375185   57426 out.go:239]   Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:48.375190   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:48.375199   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:33:50.396708   59899 addons.go:502] enable addons completed in 4.611221618s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0925 11:33:50.409202   59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 200:
	ok
	I0925 11:33:50.411339   59899 api_server.go:141] control plane version: v1.28.2
	I0925 11:33:50.411356   59899 api_server.go:131] duration metric: took 20.35197ms to wait for apiserver health ...
	I0925 11:33:50.411366   59899 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:33:50.490420   59899 system_pods.go:59] 8 kube-system pods found
	I0925 11:33:50.490453   59899 system_pods.go:61] "coredns-5dd5756b68-56lj4" [447db0fe-7ec3-443c-9219-f6520653ae3f] Running
	I0925 11:33:50.490461   59899 system_pods.go:61] "etcd-embed-certs-094323" [48127edf-44a2-46ac-b5db-c1d47f97c3a5] Running
	I0925 11:33:50.490468   59899 system_pods.go:61] "kube-apiserver-embed-certs-094323" [3a47c725-2ede-48c8-a825-e3d1f90710f2] Running
	I0925 11:33:50.490476   59899 system_pods.go:61] "kube-controller-manager-embed-certs-094323" [8692df25-5b4e-424b-8ae0-aedd5f249b98] Running
	I0925 11:33:50.490483   59899 system_pods.go:61] "kube-proxy-pjwm2" [845a56ac-d0b3-4331-aa60-8d473ca65a44] Running
	I0925 11:33:50.490489   59899 system_pods.go:61] "kube-scheduler-embed-certs-094323" [12968319-1047-4b1d-a54f-7c192604a75d] Running
	I0925 11:33:50.490500   59899 system_pods.go:61] "metrics-server-57f55c9bc5-5xjw8" [5634c692-d7e5-49d5-a39a-3473e5f58d58] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:33:50.490515   59899 system_pods.go:61] "storage-provisioner" [913ce54f-ebcc-4b9c-bf76-ff0139a1b44f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:33:50.490528   59899 system_pods.go:74] duration metric: took 79.155444ms to wait for pod list to return data ...
	I0925 11:33:50.490540   59899 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:33:50.691794   59899 default_sa.go:45] found service account: "default"
	I0925 11:33:50.691828   59899 default_sa.go:55] duration metric: took 201.27577ms for default service account to be created ...
	I0925 11:33:50.691838   59899 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:33:50.887600   59899 system_pods.go:86] 8 kube-system pods found
	I0925 11:33:50.887636   59899 system_pods.go:89] "coredns-5dd5756b68-56lj4" [447db0fe-7ec3-443c-9219-f6520653ae3f] Running
	I0925 11:33:50.887645   59899 system_pods.go:89] "etcd-embed-certs-094323" [48127edf-44a2-46ac-b5db-c1d47f97c3a5] Running
	I0925 11:33:50.887652   59899 system_pods.go:89] "kube-apiserver-embed-certs-094323" [3a47c725-2ede-48c8-a825-e3d1f90710f2] Running
	I0925 11:33:50.887662   59899 system_pods.go:89] "kube-controller-manager-embed-certs-094323" [8692df25-5b4e-424b-8ae0-aedd5f249b98] Running
	I0925 11:33:50.887668   59899 system_pods.go:89] "kube-proxy-pjwm2" [845a56ac-d0b3-4331-aa60-8d473ca65a44] Running
	I0925 11:33:50.887675   59899 system_pods.go:89] "kube-scheduler-embed-certs-094323" [12968319-1047-4b1d-a54f-7c192604a75d] Running
	I0925 11:33:50.887683   59899 system_pods.go:89] "metrics-server-57f55c9bc5-5xjw8" [5634c692-d7e5-49d5-a39a-3473e5f58d58] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:33:50.887694   59899 system_pods.go:89] "storage-provisioner" [913ce54f-ebcc-4b9c-bf76-ff0139a1b44f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:33:50.887707   59899 system_pods.go:126] duration metric: took 195.862461ms to wait for k8s-apps to be running ...
	I0925 11:33:50.887718   59899 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 11:33:50.887769   59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:33:50.910382   59899 system_svc.go:56] duration metric: took 22.655864ms WaitForService to wait for kubelet.
	I0925 11:33:50.910410   59899 kubeadm.go:581] duration metric: took 5.091888107s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 11:33:50.910429   59899 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:33:51.083597   59899 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 11:33:51.083633   59899 node_conditions.go:123] node cpu capacity is 2
	I0925 11:33:51.083648   59899 node_conditions.go:105] duration metric: took 173.214402ms to run NodePressure ...
	I0925 11:33:51.083660   59899 start.go:228] waiting for startup goroutines ...
	I0925 11:33:51.083670   59899 start.go:233] waiting for cluster config update ...
	I0925 11:33:51.083682   59899 start.go:242] writing updated cluster config ...
	I0925 11:33:51.084016   59899 ssh_runner.go:195] Run: rm -f paused
	I0925 11:33:51.130189   59899 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0925 11:33:51.132357   59899 out.go:177] * Done! kubectl is now configured to use "embed-certs-094323" cluster and "default" namespace by default
	I0925 11:33:58.376816   57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:33:58.397417   57426 api_server.go:72] duration metric: took 3m12.267407933s to wait for apiserver process to appear ...
	I0925 11:33:58.397443   57426 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:33:58.397517   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:33:58.423312   57426 logs.go:284] 1 containers: [34825b8222f1]
	I0925 11:33:58.423385   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:33:58.443439   57426 logs.go:284] 1 containers: [4b655f8475a9]
	I0925 11:33:58.443499   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:33:58.463360   57426 logs.go:284] 1 containers: [c4e353aa787b]
	I0925 11:33:58.463443   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:33:58.486151   57426 logs.go:284] 1 containers: [08dbfa6061b3]
	I0925 11:33:58.486228   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:33:58.507009   57426 logs.go:284] 1 containers: [2bccdb65c1cc]
	I0925 11:33:58.507095   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:33:58.525571   57426 logs.go:284] 1 containers: [59225a8740b7]
	I0925 11:33:58.525647   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:33:58.542397   57426 logs.go:284] 0 containers: []
	W0925 11:33:58.542424   57426 logs.go:286] No container was found matching "kindnet"
	I0925 11:33:58.542481   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:33:58.562186   57426 logs.go:284] 1 containers: [0f9de8bda7fb]
	I0925 11:33:58.562260   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:33:58.580984   57426 logs.go:284] 1 containers: [90dc66317fc1]
	I0925 11:33:58.581014   57426 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:33:58.581030   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:33:58.731921   57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
	I0925 11:33:58.731958   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
	I0925 11:33:58.759982   57426 logs.go:123] Gathering logs for Docker ...
	I0925 11:33:58.760017   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:33:58.817088   57426 logs.go:123] Gathering logs for kubelet ...
	I0925 11:33:58.817120   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 11:33:58.851581   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.852006   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.852226   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.852405   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940    1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
	W0925 11:33:58.853080   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.853245   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.853866   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.854027   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.854211   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.854408   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855047   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.855223   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855403   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855601   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.855811   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.856008   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.856210   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.856868   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:33:58.857032   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857219   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857418   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857616   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.857814   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.858011   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:58.889357   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:58.891108   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:58.893160   57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
	I0925 11:33:58.893178   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
	I0925 11:33:58.927223   57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
	I0925 11:33:58.927264   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
	I0925 11:33:58.951343   57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
	I0925 11:33:58.951376   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
	I0925 11:33:58.979268   57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
	I0925 11:33:58.979303   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
	I0925 11:33:59.010031   57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
	I0925 11:33:59.010059   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
	I0925 11:33:59.050333   57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
	I0925 11:33:59.050367   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
	I0925 11:33:59.093782   57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
	I0925 11:33:59.093820   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
	I0925 11:33:59.118196   57426 logs.go:123] Gathering logs for container status ...
	I0925 11:33:59.118222   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:33:59.228267   57426 logs.go:123] Gathering logs for dmesg ...
	I0925 11:33:59.228306   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:33:59.247426   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:59.247459   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0925 11:33:59.247517   57426 out.go:239] X Problems detected in kubelet:
	W0925 11:33:59.247534   57426 out.go:239]   Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:59.247545   57426 out.go:239]   Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:59.247554   57426 out.go:239]   Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:33:59.247563   57426 out.go:239]   Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:33:59.247574   57426 out.go:239]   Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:33:59.247584   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:33:59.247597   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:34:09.249955   57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0925 11:34:09.256612   57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0925 11:34:09.257809   57426 api_server.go:141] control plane version: v1.16.0
	I0925 11:34:09.257827   57426 api_server.go:131] duration metric: took 10.860379501s to wait for apiserver health ...
	I0925 11:34:09.257833   57426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:34:09.257883   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 11:34:09.280149   57426 logs.go:284] 1 containers: [34825b8222f1]
	I0925 11:34:09.280233   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 11:34:09.300127   57426 logs.go:284] 1 containers: [4b655f8475a9]
	I0925 11:34:09.300211   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 11:34:09.332581   57426 logs.go:284] 1 containers: [c4e353aa787b]
	I0925 11:34:09.332656   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 11:34:09.352994   57426 logs.go:284] 1 containers: [08dbfa6061b3]
	I0925 11:34:09.353061   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 11:34:09.374892   57426 logs.go:284] 1 containers: [2bccdb65c1cc]
	I0925 11:34:09.374960   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 11:34:09.395820   57426 logs.go:284] 1 containers: [59225a8740b7]
	I0925 11:34:09.395884   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 11:34:09.414225   57426 logs.go:284] 0 containers: []
	W0925 11:34:09.414245   57426 logs.go:286] No container was found matching "kindnet"
	I0925 11:34:09.414284   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0925 11:34:09.434336   57426 logs.go:284] 1 containers: [0f9de8bda7fb]
	I0925 11:34:09.434398   57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 11:34:09.456185   57426 logs.go:284] 1 containers: [90dc66317fc1]
	I0925 11:34:09.456218   57426 logs.go:123] Gathering logs for describe nodes ...
	I0925 11:34:09.456231   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 11:34:09.590378   57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
	I0925 11:34:09.590409   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
	I0925 11:34:09.617599   57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
	I0925 11:34:09.617624   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
	I0925 11:34:09.643431   57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
	I0925 11:34:09.643459   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
	I0925 11:34:09.665103   57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
	I0925 11:34:09.665129   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
	I0925 11:34:09.693931   57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
	I0925 11:34:09.693963   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
	I0925 11:34:09.742784   57426 logs.go:123] Gathering logs for Docker ...
	I0925 11:34:09.742812   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 11:34:09.804145   57426 logs.go:123] Gathering logs for dmesg ...
	I0925 11:34:09.804177   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 11:34:09.818586   57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
	I0925 11:34:09.818609   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
	I0925 11:34:09.857846   57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
	I0925 11:34:09.857875   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
	I0925 11:34:09.880799   57426 logs.go:123] Gathering logs for container status ...
	I0925 11:34:09.880828   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 11:34:09.950547   57426 logs.go:123] Gathering logs for kubelet ...
	I0925 11:34:09.950572   57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 11:34:09.983084   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.983479   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.983617   57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.983758   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940    1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
	W0925 11:34:09.984405   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.984547   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.985367   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.985576   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.985713   57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.985898   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.986632   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.986786   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.986945   57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987132   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987279   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987469   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.987663   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988255   57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	W0925 11:34:09.988398   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988533   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988685   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988822   57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.988958   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:09.989093   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.020550   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:34:10.022302   57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:34:10.024541   57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
	I0925 11:34:10.024558   57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
	I0925 11:34:10.053454   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:34:10.053477   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0925 11:34:10.053524   57426 out.go:239] X Problems detected in kubelet:
	W0925 11:34:10.053535   57426 out.go:239]   Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.053543   57426 out.go:239]   Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.053551   57426 out.go:239]   Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540    1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0925 11:34:10.053557   57426 out.go:239]   Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939    6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
	W0925 11:34:10.053563   57426 out.go:239]   Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950    6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	I0925 11:34:10.053568   57426 out.go:309] Setting ErrFile to fd 2...
	I0925 11:34:10.053573   57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:34:20.061232   57426 system_pods.go:59] 8 kube-system pods found
	I0925 11:34:20.061260   57426 system_pods.go:61] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.061267   57426 system_pods.go:61] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.061271   57426 system_pods.go:61] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.061277   57426 system_pods.go:61] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.061284   57426 system_pods.go:61] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.061288   57426 system_pods.go:61] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.061295   57426 system_pods.go:61] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.061300   57426 system_pods.go:61] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.061307   57426 system_pods.go:74] duration metric: took 10.803468736s to wait for pod list to return data ...
	I0925 11:34:20.061314   57426 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:34:20.064090   57426 default_sa.go:45] found service account: "default"
	I0925 11:34:20.064114   57426 default_sa.go:55] duration metric: took 2.793638ms for default service account to be created ...
	I0925 11:34:20.064123   57426 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:34:20.068614   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:20.068644   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.068653   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.068674   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.068682   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.068690   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.068696   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.068707   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.068719   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.068739   57426 retry.go:31] will retry after 201.15744ms: missing components: kube-dns, kube-proxy
	I0925 11:34:20.275900   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:20.275943   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.275952   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.275960   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.275967   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.275974   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.275982   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.275992   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.276001   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.276021   57426 retry.go:31] will retry after 295.538203ms: missing components: kube-dns, kube-proxy
	I0925 11:34:20.579425   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:20.579469   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:20.579480   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:20.579489   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:20.579497   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:20.579506   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:20.579513   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:20.579522   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:20.579531   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:20.579553   57426 retry.go:31] will retry after 438.061345ms: missing components: kube-dns, kube-proxy
	I0925 11:34:21.024313   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:21.024351   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:21.024360   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:21.024365   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:21.024372   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:21.024381   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:21.024390   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:21.024401   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:21.024411   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:21.024428   57426 retry.go:31] will retry after 504.61622ms: missing components: kube-dns, kube-proxy
	I0925 11:34:21.536419   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:21.536449   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:21.536460   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:21.536466   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:21.536470   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:21.536476   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:21.536480   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:21.536486   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:21.536492   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:21.536506   57426 retry.go:31] will retry after 484.39135ms: missing components: kube-dns, kube-proxy
	I0925 11:34:22.027728   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:22.027766   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:22.027776   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:22.027783   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:22.027787   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:22.027796   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:22.027804   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:22.027814   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:22.027822   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:22.027838   57426 retry.go:31] will retry after 680.21989ms: missing components: kube-dns, kube-proxy
	I0925 11:34:22.714282   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:22.714315   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:22.714326   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:22.714335   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:22.714342   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:22.714349   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:22.714354   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:22.714365   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:22.714381   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:22.714399   57426 retry.go:31] will retry after 719.383007ms: missing components: kube-dns, kube-proxy
	I0925 11:34:23.438829   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:23.438855   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:23.438862   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:23.438867   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:23.438872   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:23.438877   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:23.438882   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:23.438891   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:23.438898   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:23.438912   57426 retry.go:31] will retry after 1.277927153s: missing components: kube-dns, kube-proxy
	I0925 11:34:24.724821   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:24.724855   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:24.724864   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:24.724871   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:24.724878   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:24.724887   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:24.724894   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:24.724904   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:24.724919   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:24.724942   57426 retry.go:31] will retry after 1.757108265s: missing components: kube-dns, kube-proxy
	I0925 11:34:26.488127   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:26.488156   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:26.488163   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:26.488182   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:26.488203   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:26.488213   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:26.488222   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:26.488232   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:26.488247   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:26.488266   57426 retry.go:31] will retry after 1.427718537s: missing components: kube-dns, kube-proxy
	I0925 11:34:27.921755   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:27.921783   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:27.921790   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:27.921795   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:27.921800   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:27.921805   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:27.921810   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:27.921815   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:27.921821   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:27.921835   57426 retry.go:31] will retry after 1.957734881s: missing components: kube-dns, kube-proxy
	I0925 11:34:29.885748   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:29.885776   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:29.885783   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:29.885789   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:29.885794   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:29.885799   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:29.885803   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:29.885810   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:29.885815   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:29.885830   57426 retry.go:31] will retry after 3.054467533s: missing components: kube-dns, kube-proxy
	I0925 11:34:32.946353   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:32.946383   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:32.946391   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:32.946396   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:32.946401   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:32.946406   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:32.946410   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:32.946416   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:32.946421   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:32.946434   57426 retry.go:31] will retry after 3.761041339s: missing components: kube-dns, kube-proxy
	I0925 11:34:36.713729   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:36.713754   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:36.713761   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:36.713767   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:36.713772   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:36.713777   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:36.713781   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:36.713788   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:36.713793   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:36.713807   57426 retry.go:31] will retry after 4.734467176s: missing components: kube-dns, kube-proxy
	I0925 11:34:41.454464   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:41.454492   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:41.454498   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:41.454503   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:41.454508   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:41.454513   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:41.454518   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:41.454524   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:41.454529   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:41.454542   57426 retry.go:31] will retry after 4.698913888s: missing components: kube-dns, kube-proxy
	I0925 11:34:46.159214   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:46.159255   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:46.159266   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:46.159275   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:46.159282   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:46.159292   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:46.159299   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:46.159314   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:46.159328   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:46.159350   57426 retry.go:31] will retry after 5.507304477s: missing components: kube-dns, kube-proxy
	I0925 11:34:51.672849   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:51.672877   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:51.672884   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:51.672889   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:51.672894   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:51.672899   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:51.672905   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:51.672914   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:51.672919   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:51.672933   57426 retry.go:31] will retry after 8.254229342s: missing components: kube-dns, kube-proxy
	I0925 11:34:59.936057   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:34:59.936086   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:34:59.936094   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:34:59.936099   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:34:59.936104   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:34:59.936109   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:34:59.936114   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:34:59.936119   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:34:59.936125   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:34:59.936139   57426 retry.go:31] will retry after 9.535060954s: missing components: kube-dns, kube-proxy
	I0925 11:35:09.479385   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:09.479413   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:09.479420   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:09.479428   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:09.479433   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:09.479441   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:09.479446   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:09.479452   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:09.479459   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:09.479471   57426 retry.go:31] will retry after 13.479799453s: missing components: kube-dns, kube-proxy
	I0925 11:35:22.964926   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:22.964955   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:22.964962   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:22.964967   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:22.964972   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:22.964977   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:22.964982   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:22.964988   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:22.964993   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:22.965006   57426 retry.go:31] will retry after 14.199608167s: missing components: kube-dns, kube-proxy
	I0925 11:35:37.171988   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:37.172022   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:37.172034   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:37.172041   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:37.172048   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:37.172055   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:37.172061   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:37.172072   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:37.172083   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:37.172101   57426 retry.go:31] will retry after 17.274040235s: missing components: kube-dns, kube-proxy
	I0925 11:35:54.452675   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:35:54.452702   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:35:54.452709   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:35:54.452714   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:35:54.452719   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:35:54.452727   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:35:54.452731   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:35:54.452738   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:35:54.452743   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:35:54.452756   57426 retry.go:31] will retry after 28.29436119s: missing components: kube-dns, kube-proxy
	I0925 11:36:22.755662   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:36:22.755700   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:36:22.755710   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:36:22.755718   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:36:22.755724   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:36:22.755732   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:36:22.755746   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:36:22.755761   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:36:22.755771   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:36:22.755791   57426 retry.go:31] will retry after 35.525659438s: missing components: kube-dns, kube-proxy
	I0925 11:36:58.289849   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:36:58.289887   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:36:58.289896   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:36:58.289901   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:36:58.289910   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:36:58.289919   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:36:58.289927   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:36:58.289939   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:36:58.289950   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:36:58.289971   57426 retry.go:31] will retry after 44.058995008s: missing components: kube-dns, kube-proxy
	I0925 11:37:42.356673   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:37:42.356698   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:37:42.356705   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:37:42.356710   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:37:42.356715   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:37:42.356721   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:37:42.356725   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:37:42.356731   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:37:42.356736   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:37:42.356752   57426 retry.go:31] will retry after 47.757072258s: missing components: kube-dns, kube-proxy
	I0925 11:38:30.124408   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:38:30.124436   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:38:30.124443   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:38:30.124449   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:38:30.124454   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:38:30.124459   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:38:30.124464   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:38:30.124470   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:38:30.124475   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:38:30.124490   57426 retry.go:31] will retry after 48.54868015s: missing components: kube-dns, kube-proxy
	I0925 11:39:18.680525   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:39:18.680555   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:39:18.680561   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:39:18.680567   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:39:18.680572   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:39:18.680578   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:39:18.680582   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:39:18.680589   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:39:18.680594   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:39:18.680607   57426 retry.go:31] will retry after 53.095866632s: missing components: kube-dns, kube-proxy
	I0925 11:40:11.783486   57426 system_pods.go:86] 8 kube-system pods found
	I0925 11:40:11.783513   57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:40:11.783520   57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
	I0925 11:40:11.783527   57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
	I0925 11:40:11.783532   57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
	I0925 11:40:11.783537   57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 11:40:11.783542   57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
	I0925 11:40:11.783548   57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:40:11.783553   57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 11:40:11.786119   57426 out.go:177] 
	W0925 11:40:11.787697   57426 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
	W0925 11:40:11.787711   57426 out.go:239] * 
	W0925 11:40:11.788461   57426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 11:40:11.790057   57426 out.go:177] 
	
	* 
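	(Note on the retry cadence above: the repeated "retry.go:31] will retry after ..." lines show a poll-with-backoff loop — the delay between checks grows roughly geometrically with jitter, from about 200ms up to ~53s, until the overall "wait 6m0s for node" deadline expires and the run exits with GUEST_START. The Go sketch below reproduces only that shape for illustration; waitFor and check are hypothetical names, not minikube's actual retry.go API.)

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // waitFor polls check() until it succeeds or the overall deadline
	    // passes, roughly doubling the delay between attempts (with jitter,
	    // capped) much like the observed sequence 201ms, 295ms, ... 53s.
	    // Illustrative sketch only; not minikube's real implementation.
	    func waitFor(check func() error, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        delay := 200 * time.Millisecond
	        for {
	            err := check()
	            if err == nil {
	                return nil
	            }
	            if time.Now().After(deadline) {
	                return fmt.Errorf("timed out waiting: %w", err)
	            }
	            // add up to 50% jitter, log the wait, then back off
	            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
	            fmt.Printf("will retry after %v: %v\n", sleep, err)
	            time.Sleep(sleep)
	            if delay *= 2; delay > time.Minute {
	                delay = time.Minute
	            }
	        }
	    }

	    func main() {
	        // stand-in for the k8s-apps check that kept reporting
	        // "missing components: kube-dns, kube-proxy" in this run
	        missing := func() error {
	            return errors.New("missing components: kube-dns, kube-proxy")
	        }
	        // 6m mirrors the "wait 6m0s for node" budget seen in the failure
	        if err := waitFor(missing, 6*time.Minute); err != nil {
	            fmt.Println("X Exiting:", err)
	        }
	    }

	(In this run the check never succeeds because coredns and kube-proxy stay Pending for the full budget, so the loop above would exhaust its deadline exactly as the log shows.)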
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 11:24:53 UTC, ends at Mon 2023-09-25 11:51:28 UTC. --
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572406518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572497492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572525871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572544812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618491365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618680379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618696521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618704838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.155674989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.155883992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.156004251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.156243152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.045907108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.046033975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.046090982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.046108215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1190]: time="2023-09-25T11:30:56.109068079Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.462862941Z" level=info msg="shim disconnected" id=5d3673792ccfc336b8935a34b5a443284dc8b677eebf5137a219cccc3c403f5d namespace=moby
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.462964770Z" level=warning msg="cleaning up after shim disconnected" id=5d3673792ccfc336b8935a34b5a443284dc8b677eebf5137a219cccc3c403f5d namespace=moby
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.462982909Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:30:56 old-k8s-version-694015 dockerd[1190]: time="2023-09-25T11:30:56.463078511Z" level=info msg="ignoring event" container=5d3673792ccfc336b8935a34b5a443284dc8b677eebf5137a219cccc3c403f5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824501229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824684623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824701374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824713075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* time="2023-09-25T11:51:28Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	CONTAINER ID   IMAGE                        COMMAND                  CREATED          STATUS                      PORTS     NAMES
	0f9de8bda7fb   kubernetesui/dashboard       "/dashboard --insecu…"   20 minutes ago   Up 20 minutes                         k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-z674w_kubernetes-dashboard_5d234114-a13f-403f-98e0-7b5fbf830fdd_0
	5d3673792ccf   registry.k8s.io/echoserver   "nginx -g 'daemon of…"   20 minutes ago   Exited (1) 20 minutes ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-mxvxx_kubernetes-dashboard_da3f5657-7e9d-4ba7-b42a-d92a2b5fd683_0
	90dc66317fc1   6e38f40d628d                 "/storage-provisioner"   20 minutes ago   Up 20 minutes                         k8s_storage-provisioner_storage-provisioner_kube-system_c74c1aa8-7249-477e-8ef9-1bcaf418ad03_0
	b16fb26ba287   k8s.gcr.io/pause:3.1         "/pause"                 20 minutes ago   Up 20 minutes                         k8s_POD_storage-provisioner_kube-system_c74c1aa8-7249-477e-8ef9-1bcaf418ad03_0
	4eb82cb0fa23   k8s.gcr.io/pause:3.1         "/pause"                 20 minutes ago   Up 20 minutes                         k8s_POD_kubernetes-dashboard-84b68f675b-z674w_kubernetes-dashboard_5d234114-a13f-403f-98e0-7b5fbf830fdd_0
	802d2fbd8809   k8s.gcr.io/pause:3.1         "/pause"                 20 minutes ago   Up 20 minutes                         k8s_POD_dashboard-metrics-scraper-d6b4b5544-mxvxx_kubernetes-dashboard_da3f5657-7e9d-4ba7-b42a-d92a2b5fd683_0
	6a94e2e5690b   k8s.gcr.io/pause:3.1         "/pause"                 20 minutes ago   Up 20 minutes                         k8s_POD_metrics-server-74d5856cc6-wbskx_kube-system_5925c507-8225-4b9c-b89e-13346451d090_0
	c4e353aa787b   bf261d157914                 "/coredns -conf /etc…"   20 minutes ago   Up 20 minutes                         k8s_coredns_coredns-5644d7b6d9-qnqxm_kube-system_f5167272-c4e6-438f-ba45-f977df42bc3b_0
	2bccdb65c1cc   c21b0c7400f9                 "/usr/local/bin/kube…"   20 minutes ago   Up 20 minutes                         k8s_kube-proxy_kube-proxy-gsdzk_kube-system_d183e6c3-2cf1-46d4-a9ff-e03c97aa161c_0
	2088f3a7c0bc   k8s.gcr.io/pause:3.1         "/pause"                 20 minutes ago   Up 20 minutes                         k8s_POD_kube-proxy-gsdzk_kube-system_d183e6c3-2cf1-46d4-a9ff-e03c97aa161c_0
	75c3319baa09   k8s.gcr.io/pause:3.1         "/pause"                 20 minutes ago   Up 20 minutes                         k8s_POD_coredns-5644d7b6d9-qnqxm_kube-system_f5167272-c4e6-438f-ba45-f977df42bc3b_0
	eb63d31189ed   k8s.gcr.io/pause:3.1         "/pause"                 20 minutes ago   Created                               k8s_POD_coredns-5644d7b6d9-rn247_kube-system_f0e633d0-75fb-4406-928a-ec680c4052fa_0
	4b655f8475a9   b2756210eeab                 "etcd --advertise-cl…"   21 minutes ago   Up 21 minutes                         k8s_etcd_etcd-old-k8s-version-694015_kube-system_319810d3e321e4b27bff365f5661410b_0
	08dbfa6061b3   301ddc62b80b                 "kube-scheduler --au…"   21 minutes ago   Up 21 minutes                         k8s_kube-scheduler_kube-scheduler-old-k8s-version-694015_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	59225a8740b7   06a629a7e51c                 "kube-controller-man…"   21 minutes ago   Up 21 minutes                         k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-694015_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	34825b8222f1   b305571ca60a                 "kube-apiserver --ad…"   21 minutes ago   Up 21 minutes                         k8s_kube-apiserver_kube-apiserver-old-k8s-version-694015_kube-system_ea8f9e449dd1304167590b964553922a_0
	5b274efecb4d   k8s.gcr.io/pause:3.1         "/pause"                 21 minutes ago   Up 21 minutes                         k8s_POD_etcd-old-k8s-version-694015_kube-system_319810d3e321e4b27bff365f5661410b_0
	6e623a69a033   k8s.gcr.io/pause:3.1         "/pause"                 21 minutes ago   Up 21 minutes                         k8s_POD_kube-scheduler-old-k8s-version-694015_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	961cf08898d9   k8s.gcr.io/pause:3.1         "/pause"                 21 minutes ago   Up 21 minutes                         k8s_POD_kube-controller-manager-old-k8s-version-694015_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	713ec26ea888   k8s.gcr.io/pause:3.1         "/pause"                 21 minutes ago   Up 21 minutes                         k8s_POD_kube-apiserver-old-k8s-version-694015_kube-system_ea8f9e449dd1304167590b964553922a_0
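
The fatal line at the top of this section is expected on this cluster: Kubernetes v1.16's dockershim only serves the older v1alpha2 CRI, so a crictl built against the v1 RuntimeService fails to validate the endpoint, and the listing above appears to fall back to querying the Docker runtime directly. The same table can be reproduced from inside the VM; a sketch, assuming the profile is still running:

    minikube -p old-k8s-version-694015 ssh -- docker ps -a

Note that every control-plane and addon container is Up; only dashboard-metrics-scraper's container exited, and there are two coredns sandboxes (one stuck in Created).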
	
	* 
	* ==> coredns [c4e353aa787b] <==
	* .:53
	2023-09-25T11:30:47.501Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-09-25T11:30:47.501Z [INFO] CoreDNS-1.6.2
	2023-09-25T11:30:47.501Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-694015
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-694015
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=old-k8s-version-694015
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T11_30_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 11:30:26 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 11:51:21 +0000   Mon, 25 Sep 2023 11:30:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 11:51:21 +0000   Mon, 25 Sep 2023 11:30:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 11:51:21 +0000   Mon, 25 Sep 2023 11:30:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 25 Sep 2023 11:51:21 +0000   Mon, 25 Sep 2023 11:48:50 +0000   KubeletNotReady              PLEG is not healthy: pleg was last seen active 5m33.078918151s ago; threshold is 3m0s
	Addresses:
	  InternalIP:  192.168.50.17
	  Hostname:    old-k8s-version-694015
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 1bd5d978d1e543b686646b2c32f30862
	 System UUID:                1bd5d978-d1e5-43b6-8664-6b2c32f30862
	 Boot ID:                    5678d5b5-5910-4d2d-a245-2b8fc64bd779
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-qnqxm                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                etcd-old-k8s-version-694015                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                kube-apiserver-old-k8s-version-694015             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                kube-controller-manager-old-k8s-version-694015    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                kube-proxy-gsdzk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                kube-scheduler-old-k8s-version-694015             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                metrics-server-74d5856cc6-wbskx                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-mxvxx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-z674w             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)    kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)    kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)    kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeHasSufficientPID
	  Normal  Starting                 20m                  kube-proxy, old-k8s-version-694015  Starting kube-proxy.
	  Normal  NodeReady                5m39s                kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeReady
	  Normal  NodeNotReady             2m38s (x2 over 17m)  kubelet, old-k8s-version-694015     Node old-k8s-version-694015 status is now: NodeNotReady
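
The Ready=False condition with reason KubeletNotReady is the proximate cause of the failure: PLEG, the kubelet's Pod Lifecycle Event Generator, must complete a container relist within the 3m0s threshold, and here it was last seen active more than five minutes earlier, so the kubelet reports the node NotReady and pod statuses go stale. The condition can be inspected directly; a sketch using the node name from this run:

    kubectl --context old-k8s-version-694015 get node old-k8s-version-694015 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}{"\n"}'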
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076891] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.528148] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.807712] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.166866] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.627379] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep25 11:25] systemd-fstab-generator[508]: Ignoring "noauto" for root device
	[  +0.112649] systemd-fstab-generator[519]: Ignoring "noauto" for root device
	[  +1.250517] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.395221] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[  +0.132329] systemd-fstab-generator[928]: Ignoring "noauto" for root device
	[  +0.148539] systemd-fstab-generator[941]: Ignoring "noauto" for root device
	[  +6.146658] systemd-fstab-generator[1181]: Ignoring "noauto" for root device
	[  +1.531877] kauditd_printk_skb: 67 callbacks suppressed
	[ +13.077793] systemd-fstab-generator[1658]: Ignoring "noauto" for root device
	[  +0.487565] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.199945] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.809912] kauditd_printk_skb: 5 callbacks suppressed
	[Sep25 11:26] hrtimer: interrupt took 6685373 ns
	[Sep25 11:30] systemd-fstab-generator[6846]: Ignoring "noauto" for root device
	[Sep25 11:31] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [4b655f8475a9] <==
	* 2023-09-25 11:30:21.604807 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-09-25 11:30:21.607417 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-25 11:30:21.608224 I | etcdserver: a74ab9f845be4a88 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-25 11:30:21.609008 I | etcdserver/membership: added member a74ab9f845be4a88 [https://192.168.50.17:2380] to cluster e7a7808069af5882
	2023-09-25 11:30:21.609764 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-25 11:30:21.610013 I | embed: listening for metrics on http://192.168.50.17:2381
	2023-09-25 11:30:22.316022 I | raft: a74ab9f845be4a88 is starting a new election at term 1
	2023-09-25 11:30:22.316075 I | raft: a74ab9f845be4a88 became candidate at term 2
	2023-09-25 11:30:22.316089 I | raft: a74ab9f845be4a88 received MsgVoteResp from a74ab9f845be4a88 at term 2
	2023-09-25 11:30:22.316099 I | raft: a74ab9f845be4a88 became leader at term 2
	2023-09-25 11:30:22.316104 I | raft: raft.node: a74ab9f845be4a88 elected leader a74ab9f845be4a88 at term 2
	2023-09-25 11:30:22.316356 I | etcdserver: setting up the initial cluster version to 3.3
	2023-09-25 11:30:22.318109 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-25 11:30:22.318162 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-25 11:30:22.318191 I | etcdserver: published {Name:old-k8s-version-694015 ClientURLs:[https://192.168.50.17:2379]} to cluster e7a7808069af5882
	2023-09-25 11:30:22.318197 I | embed: ready to serve client requests
	2023-09-25 11:30:22.318821 I | embed: ready to serve client requests
	2023-09-25 11:30:22.319844 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-25 11:30:22.319991 I | embed: serving client requests on 192.168.50.17:2379
	2023-09-25 11:40:22.349070 I | mvcc: store.index: compact 705
	2023-09-25 11:40:22.356379 I | mvcc: finished scheduled compaction at 705 (took 6.531112ms)
	2023-09-25 11:45:22.355942 I | mvcc: store.index: compact 946
	2023-09-25 11:45:22.358397 I | mvcc: finished scheduled compaction at 946 (took 1.629731ms)
	2023-09-25 11:50:22.362938 I | mvcc: store.index: compact 1190
	2023-09-25 11:50:22.365539 I | mvcc: finished scheduled compaction at 1190 (took 1.728551ms)
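
Nothing in the etcd log points at the failure: the single member wins its election at term 2 during startup and then only performs routine mvcc compactions every five minutes, so the datastore can be ruled out. For independent confirmation, etcd's health can be probed with the cert paths shown above; a sketch, assuming etcdctl is available inside the VM (it may not ship in the minikube ISO):

    minikube -p old-k8s-version-694015 ssh -- sudo ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health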
	
	* 
	* ==> kernel <==
	*  11:51:28 up 26 min,  0 users,  load average: 0.06, 0.15, 0.21
	Linux old-k8s-version-694015 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [34825b8222f1] <==
	* I0925 11:45:26.973699       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:45:26.973970       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:45:26.974212       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:45:26.974466       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:46:26.975055       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:46:26.975165       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:46:26.975224       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:46:26.975233       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:48:26.975907       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:48:26.976230       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:48:26.976641       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:48:26.976828       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:50:26.978474       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:50:26.978688       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:50:26.978924       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:50:26.978966       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:51:26.979276       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:51:26.979403       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:51:26.979671       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:51:26.979708       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
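
The repeating 503 for v1beta1.metrics.k8s.io means the aggregated metrics API is registered but has no healthy backend: the metrics-server pod never left Pending (see the system_pods dump earlier), so the aggregator's proxy has no endpoint to forward OpenAPI requests to. Two checks that would show this, assuming the profile is still up:

    kubectl --context old-k8s-version-694015 get apiservice v1beta1.metrics.k8s.io
    kubectl --context old-k8s-version-694015 -n kube-system get endpoints metrics-server

The APIService should report Available=False and the endpoints object should be empty.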
	
	* 
	* ==> kube-controller-manager [59225a8740b7] <==
	* W0925 11:45:41.951828       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0925 11:45:50.432124       1 node_lifecycle_controller.go:1085] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0925 11:45:53.471749       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:46:13.953840       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:46:23.724212       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:46:45.956748       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:46:53.976331       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:47:17.958827       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:47:24.228167       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:47:49.960817       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:47:54.479994       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:48:21.963142       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:48:24.732358       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:48:53.965927       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:48:54.984758       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0925 11:48:55.445015       1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	E0925 11:49:25.237075       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:49:25.967731       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:49:55.490117       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:49:57.970026       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:50:25.742286       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:50:29.972384       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:50:55.994482       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:51:01.974121       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:51:26.246277       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
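
These warnings are downstream of the same unavailable aggregated API: discovery includes metrics.k8s.io/v1beta1, and both the garbage collector and the resource-quota controller need every discovered group to respond before they can build a complete resource list, so one 503-ing APIService makes them log on every sync. Probing the group directly reproduces the error; a sketch:

    kubectl --context old-k8s-version-694015 get --raw /apis/metrics.k8s.io/v1beta1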
	
	* 
	* ==> kube-proxy [2bccdb65c1cc] <==
	* W0925 11:30:47.128400       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0925 11:30:47.177538       1 node.go:135] Successfully retrieved node IP: 192.168.50.17
	I0925 11:30:47.177648       1 server_others.go:149] Using iptables Proxier.
	I0925 11:30:47.271820       1 server.go:529] Version: v1.16.0
	I0925 11:30:47.304914       1 config.go:313] Starting service config controller
	I0925 11:30:47.305050       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0925 11:30:47.305152       1 config.go:131] Starting endpoints config controller
	I0925 11:30:47.305167       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0925 11:30:47.424722       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0925 11:30:47.424968       1 shared_informer.go:204] Caches are synced for service config 
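
kube-proxy synced its caches at 11:30:47 and has logged nothing since, yet the apiserver still reports the kube-proxy pod as Pending/ContainersNotReady. That combination fits stale status from the unhealthy kubelet rather than a broken proxy. Whether the proxy is actually programming services can be checked from the iptables side; a sketch, run inside the VM (the KUBE-SERVICES chain is created by the iptables proxier):

    minikube -p old-k8s-version-694015 ssh -- sudo iptables -t nat -L KUBE-SERVICES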
	
	* 
	* ==> kube-scheduler [08dbfa6061b3] <==
	* W0925 11:30:25.965118       1 authentication.go:79] Authentication is disabled
	I0925 11:30:25.965128       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0925 11:30:25.969940       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0925 11:30:26.032268       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 11:30:26.032513       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 11:30:26.034880       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 11:30:26.035163       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 11:30:26.035326       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 11:30:26.035758       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 11:30:26.041977       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 11:30:26.042199       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 11:30:26.042371       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 11:30:26.043936       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 11:30:26.044107       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 11:30:27.035540       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 11:30:27.039764       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 11:30:27.039841       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 11:30:27.044797       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 11:30:27.047742       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 11:30:27.047784       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 11:30:27.049796       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 11:30:27.051510       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 11:30:27.054657       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 11:30:27.058480       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 11:30:27.061633       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
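
All of the scheduler's "forbidden" errors fall within the two seconds around 11:30:26, before the cluster's RBAC bootstrap completed, and they stop once the system:kube-scheduler bindings exist; this is ordinary v1.16 startup noise, unrelated to the failure. If in doubt, the permission can be verified after the fact; a sketch using impersonation:

    kubectl --context old-k8s-version-694015 auth can-i list pods --as=system:kube-scheduler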
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 11:24:53 UTC, ends at Mon 2023-09-25 11:51:28 UTC. --
	Sep 25 11:49:25 old-k8s-version-694015 kubelet[6852]: I0925 11:49:25.054130    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m37.107241048s ago; threshold is 3m0s
	Sep 25 11:49:30 old-k8s-version-694015 kubelet[6852]: I0925 11:49:30.054473    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m42.107675156s ago; threshold is 3m0s
	Sep 25 11:49:35 old-k8s-version-694015 kubelet[6852]: I0925 11:49:35.055194    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m47.108386341s ago; threshold is 3m0s
	Sep 25 11:49:40 old-k8s-version-694015 kubelet[6852]: I0925 11:49:40.056067    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m52.109267231s ago; threshold is 3m0s
	Sep 25 11:49:45 old-k8s-version-694015 kubelet[6852]: I0925 11:49:45.056834    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 3m57.110033654s ago; threshold is 3m0s
	Sep 25 11:49:50 old-k8s-version-694015 kubelet[6852]: I0925 11:49:50.057199    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 4m2.110391567s ago; threshold is 3m0s
	Sep 25 11:49:55 old-k8s-version-694015 kubelet[6852]: I0925 11:49:55.058022    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 4m7.111210181s ago; threshold is 3m0s
	Sep 25 11:50:00 old-k8s-version-694015 kubelet[6852]: I0925 11:50:00.058928    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 4m12.112129989s ago; threshold is 3m0s
	Sep 25 11:50:05 old-k8s-version-694015 kubelet[6852]: I0925 11:50:05.059540    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 4m17.112732126s ago; threshold is 3m0s
	Sep 25 11:50:10 old-k8s-version-694015 kubelet[6852]: I0925 11:50:10.060624    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 4m22.113731411s ago; threshold is 3m0s
	Sep 25 11:50:15 old-k8s-version-694015 kubelet[6852]: I0925 11:50:15.061057    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 4m27.114238899s ago; threshold is 3m0s
	Sep 25 11:50:20 old-k8s-version-694015 kubelet[6852]: I0925 11:50:20.061408    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 4m32.114596189s ago; threshold is 3m0s
	Sep 25 11:50:25 old-k8s-version-694015 kubelet[6852]: I0925 11:50:25.062382    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 4m37.115558243s ago; threshold is 3m0s
	Sep 25 11:50:30 old-k8s-version-694015 kubelet[6852]: I0925 11:50:30.062703    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 4m42.115888562s ago; threshold is 3m0s
	Sep 25 11:50:35 old-k8s-version-694015 kubelet[6852]: I0925 11:50:35.063455    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 4m47.116655511s ago; threshold is 3m0s
	Sep 25 11:50:40 old-k8s-version-694015 kubelet[6852]: I0925 11:50:40.064272    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 4m52.117437987s ago; threshold is 3m0s
	Sep 25 11:50:45 old-k8s-version-694015 kubelet[6852]: I0925 11:50:45.064692    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 4m57.117880228s ago; threshold is 3m0s
	Sep 25 11:50:50 old-k8s-version-694015 kubelet[6852]: I0925 11:50:50.065447    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 5m2.118646298s ago; threshold is 3m0s
	Sep 25 11:50:55 old-k8s-version-694015 kubelet[6852]: I0925 11:50:55.065847    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 5m7.119046121s ago; threshold is 3m0s
	Sep 25 11:51:00 old-k8s-version-694015 kubelet[6852]: I0925 11:51:00.066450    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 5m12.11965437s ago; threshold is 3m0s
	Sep 25 11:51:05 old-k8s-version-694015 kubelet[6852]: I0925 11:51:05.066729    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 5m17.119922321s ago; threshold is 3m0s
	Sep 25 11:51:10 old-k8s-version-694015 kubelet[6852]: I0925 11:51:10.067077    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 5m22.120276038s ago; threshold is 3m0s
	Sep 25 11:51:15 old-k8s-version-694015 kubelet[6852]: I0925 11:51:15.068099    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 5m27.12128936s ago; threshold is 3m0s
	Sep 25 11:51:20 old-k8s-version-694015 kubelet[6852]: I0925 11:51:20.068433    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 5m32.121637241s ago; threshold is 3m0s
	Sep 25 11:51:25 old-k8s-version-694015 kubelet[6852]: I0925 11:51:25.068804    6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 5m37.122002994s ago; threshold is 3m0s
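
The kubelet log is one long tail of the same symptom: the PLEG relist falls steadily further behind (3m37s at 11:49:25, 5m37s by 11:51:25, against a 3m0s threshold), so pod synchronization is skipped and node status decays, exactly as the describe-nodes events showed. On a 2-CPU, ~2.1Gi VM, a Docker relist stalling this badly is plausible resource starvation. The kubelet's own health endpoint reflects the same state; a sketch, assuming curl is present in the ISO (10248 is the kubelet's default healthz port):

    minikube -p old-k8s-version-694015 ssh -- curl -s http://127.0.0.1:10248/healthz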
	
	* 
	* ==> kubernetes-dashboard [0f9de8bda7fb] <==
	* 2023/09/25 11:39:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:39:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:40:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:40:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:41:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:41:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:42:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:42:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:43:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:43:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:44:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:44:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:45:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:45:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:46:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:46:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:47:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:47:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:48:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:48:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:49:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:49:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:50:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:50:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:51:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
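
The dashboard's 30-second retry loop matches the container table earlier in this dump: the dashboard-metrics-scraper container (5d3673792ccf) exited immediately after start, so the service its health check targets never answers. A sketch to confirm the backing pod and endpoints, assuming the profile is still up:

    kubectl --context old-k8s-version-694015 -n kubernetes-dashboard get pods,endpoints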
	
	* 
	* ==> storage-provisioner [90dc66317fc1] <==
	* I0925 11:30:51.322039       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 11:30:51.347548       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 11:30:51.348062       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 11:30:51.364910       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 11:30:51.365497       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-694015_c2b9f123-e72a-43cd-8aaf-531be42e41fa!
	I0925 11:30:51.368701       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82068dcb-41ed-493c-a127-6ea04652eda5", APIVersion:"v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-694015_c2b9f123-e72a-43cd-8aaf-531be42e41fa became leader
	I0925 11:30:51.466721       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-694015_c2b9f123-e72a-43cd-8aaf-531be42e41fa!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694015 -n old-k8s-version-694015
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-694015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-5644d7b6d9-qnqxm metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-694015 describe pod coredns-5644d7b6d9-qnqxm metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-694015 describe pod coredns-5644d7b6d9-qnqxm metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w: exit status 1 (63.170092ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-qnqxm" not found
	Error from server (NotFound): pods "metrics-server-74d5856cc6-wbskx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-d6b4b5544-mxvxx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-84b68f675b-z674w" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-694015 describe pod coredns-5644d7b6d9-qnqxm metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w: exit status 1
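
The NotFound errors above are not an additional failure: the non-running pods were enumerated at helpers_test.go:272, but by the time the describe at helpers_test.go:277 ran (the command itself returned in 63ms), the profile's pods were evidently already gone, so the post-mortem detail is lost. Looking up namespace and name in one pass keeps that window small; a sketch of the same query done as a single pipeline:

    kubectl --context old-k8s-version-694015 get po -A --field-selector=status.phase!=Running \
      --no-headers -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name |
      while read -r ns name; do
        kubectl --context old-k8s-version-694015 -n "$ns" describe pod "$name"
      done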
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (133.22s)


Test pass (281/315)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.99
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.28.2/json-events 4.18
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.05
16 TestDownloadOnly/DeleteAll 0.12
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
19 TestBinaryMirror 0.53
20 TestOffline 162.76
22 TestAddons/Setup 151.4
24 TestAddons/parallel/Registry 15.96
25 TestAddons/parallel/Ingress 26.21
26 TestAddons/parallel/InspektorGadget 10.91
27 TestAddons/parallel/MetricsServer 5.93
28 TestAddons/parallel/HelmTiller 13.15
30 TestAddons/parallel/CSI 66.53
31 TestAddons/parallel/Headlamp 15.28
32 TestAddons/parallel/CloudSpanner 5.6
35 TestAddons/serial/GCPAuth/Namespaces 0.13
36 TestAddons/StoppedEnableDisable 13.33
37 TestCertOptions 111.99
38 TestCertExpiration 297.95
39 TestDockerFlags 133.05
40 TestForceSystemdFlag 51.97
41 TestForceSystemdEnv 67.43
43 TestKVMDriverInstallOrUpdate 3.12
47 TestErrorSpam/setup 52.45
48 TestErrorSpam/start 0.32
49 TestErrorSpam/status 0.75
50 TestErrorSpam/pause 1.19
51 TestErrorSpam/unpause 1.24
52 TestErrorSpam/stop 4.19
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 77.12
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 36.24
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 2.52
64 TestFunctional/serial/CacheCmd/cache/add_local 1.29
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.19
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 41.81
73 TestFunctional/serial/ComponentHealth 0.08
74 TestFunctional/serial/LogsCmd 1.26
75 TestFunctional/serial/LogsFileCmd 1.2
76 TestFunctional/serial/InvalidService 5.2
78 TestFunctional/parallel/ConfigCmd 0.29
79 TestFunctional/parallel/DashboardCmd 15.95
80 TestFunctional/parallel/DryRun 0.29
81 TestFunctional/parallel/InternationalLanguage 0.15
82 TestFunctional/parallel/StatusCmd 1.02
86 TestFunctional/parallel/ServiceCmdConnect 12.55
87 TestFunctional/parallel/AddonsCmd 0.11
88 TestFunctional/parallel/PersistentVolumeClaim 58.04
90 TestFunctional/parallel/SSHCmd 0.48
91 TestFunctional/parallel/CpCmd 0.98
92 TestFunctional/parallel/MySQL 44.26
93 TestFunctional/parallel/FileSync 0.32
94 TestFunctional/parallel/CertSync 1.41
98 TestFunctional/parallel/NodeLabels 0.07
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.26
102 TestFunctional/parallel/License 0.18
112 TestFunctional/parallel/ServiceCmd/DeployApp 13.22
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
114 TestFunctional/parallel/ProfileCmd/profile_list 0.32
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
116 TestFunctional/parallel/MountCmd/any-port 9.6
117 TestFunctional/parallel/MountCmd/specific-port 1.94
118 TestFunctional/parallel/MountCmd/VerifyCleanup 1.42
119 TestFunctional/parallel/ServiceCmd/List 0.31
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
122 TestFunctional/parallel/ServiceCmd/Format 0.39
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
127 TestFunctional/parallel/ImageCommands/ImageBuild 3.58
128 TestFunctional/parallel/ImageCommands/Setup 1.62
129 TestFunctional/parallel/ServiceCmd/URL 0.4
130 TestFunctional/parallel/Version/short 0.05
131 TestFunctional/parallel/Version/components 0.92
132 TestFunctional/parallel/DockerEnv/bash 1.29
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.67
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.62
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.34
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.23
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.83
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.03
143 TestFunctional/delete_addon-resizer_images 0.07
144 TestFunctional/delete_my-image_image 0.02
145 TestFunctional/delete_minikube_cached_images 0.01
146 TestGvisorAddon 279.64
149 TestImageBuild/serial/Setup 53.63
150 TestImageBuild/serial/NormalBuild 1.58
151 TestImageBuild/serial/BuildWithBuildArg 1.27
152 TestImageBuild/serial/BuildWithDockerIgnore 0.37
153 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.29
156 TestIngressAddonLegacy/StartLegacyK8sCluster 78.06
158 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.44
159 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.55
160 TestIngressAddonLegacy/serial/ValidateIngressAddons 33.18
163 TestJSONOutput/start/Command 64.18
164 TestJSONOutput/start/Audit 0
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/pause/Command 0.58
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.54
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 8.09
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.19
191 TestMainNoArgs 0.04
192 TestMinikubeProfile 103.92
195 TestMountStart/serial/StartWithMountFirst 31.25
196 TestMountStart/serial/VerifyMountFirst 0.37
197 TestMountStart/serial/StartWithMountSecond 28.73
198 TestMountStart/serial/VerifyMountSecond 0.37
199 TestMountStart/serial/DeleteFirst 0.87
200 TestMountStart/serial/VerifyMountPostDelete 0.37
201 TestMountStart/serial/Stop 2.07
202 TestMountStart/serial/RestartStopped 24.75
203 TestMountStart/serial/VerifyMountPostStop 0.38
206 TestMultiNode/serial/FreshStart2Nodes 121.01
207 TestMultiNode/serial/DeployApp2Nodes 6.13
208 TestMultiNode/serial/PingHostFrom2Pods 0.84
209 TestMultiNode/serial/AddNode 45.77
210 TestMultiNode/serial/ProfileList 0.2
211 TestMultiNode/serial/CopyFile 7.32
212 TestMultiNode/serial/StopNode 3.94
213 TestMultiNode/serial/StartAfterStop 32.31
214 TestMultiNode/serial/RestartKeepsNodes 174.98
215 TestMultiNode/serial/DeleteNode 1.72
216 TestMultiNode/serial/StopMultiNode 25.61
217 TestMultiNode/serial/RestartMultiNode 134.47
218 TestMultiNode/serial/ValidateNameConflict 54.42
223 TestPreload 207.57
225 TestScheduledStopUnix 123.13
226 TestSkaffold 139.05
229 TestRunningBinaryUpgrade 197.61
231 TestKubernetesUpgrade 205.29
244 TestStoppedBinaryUpgrade/Setup 0.3
245 TestStoppedBinaryUpgrade/Upgrade 205.35
247 TestPause/serial/Start 94
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
257 TestNoKubernetes/serial/StartWithK8s 60.4
258 TestNetworkPlugins/group/auto/Start 123.96
259 TestPause/serial/SecondStartNoReconfiguration 91.68
260 TestNoKubernetes/serial/StartWithStopK8s 42.57
261 TestStoppedBinaryUpgrade/MinikubeLogs 1.31
262 TestNetworkPlugins/group/flannel/Start 88.63
263 TestNoKubernetes/serial/Start 46.71
264 TestPause/serial/Pause 0.8
265 TestPause/serial/VerifyStatus 0.31
266 TestPause/serial/Unpause 0.67
267 TestPause/serial/PauseAgain 0.99
268 TestPause/serial/DeletePaused 1.26
269 TestPause/serial/VerifyDeletedResources 0.7
270 TestNetworkPlugins/group/kindnet/Start 91.28
271 TestNetworkPlugins/group/auto/KubeletFlags 0.19
272 TestNetworkPlugins/group/auto/NetCatPod 12.3
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
274 TestNoKubernetes/serial/ProfileList 1.05
275 TestNoKubernetes/serial/Stop 2.2
276 TestNoKubernetes/serial/StartNoArgs 47.9
277 TestNetworkPlugins/group/auto/DNS 0.22
278 TestNetworkPlugins/group/auto/Localhost 0.19
279 TestNetworkPlugins/group/auto/HairPin 0.17
280 TestNetworkPlugins/group/enable-default-cni/Start 133.22
281 TestNetworkPlugins/group/flannel/ControllerPod 5.03
282 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
283 TestNetworkPlugins/group/flannel/NetCatPod 12.57
284 TestNetworkPlugins/group/flannel/DNS 0.18
285 TestNetworkPlugins/group/flannel/Localhost 0.15
286 TestNetworkPlugins/group/flannel/HairPin 0.18
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
288 TestNetworkPlugins/group/bridge/Start 97.93
289 TestNetworkPlugins/group/kubenet/Start 118.35
290 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
291 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
292 TestNetworkPlugins/group/kindnet/NetCatPod 15.32
293 TestNetworkPlugins/group/kindnet/DNS 0.19
294 TestNetworkPlugins/group/kindnet/Localhost 0.16
295 TestNetworkPlugins/group/kindnet/HairPin 0.16
296 TestNetworkPlugins/group/custom-flannel/Start 95.99
297 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
298 TestNetworkPlugins/group/bridge/NetCatPod 13.33
299 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
300 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.34
301 TestNetworkPlugins/group/bridge/DNS 0.24
302 TestNetworkPlugins/group/bridge/Localhost 0.21
303 TestNetworkPlugins/group/bridge/HairPin 0.22
304 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
305 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
306 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
307 TestNetworkPlugins/group/kubenet/KubeletFlags 0.24
308 TestNetworkPlugins/group/kubenet/NetCatPod 12.44
309 TestNetworkPlugins/group/calico/Start 106.18
310 TestNetworkPlugins/group/false/Start 104.37
311 TestNetworkPlugins/group/kubenet/DNS 0.19
312 TestNetworkPlugins/group/kubenet/Localhost 0.16
313 TestNetworkPlugins/group/kubenet/HairPin 0.17
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.4
317 TestStartStop/group/old-k8s-version/serial/FirstStart 168.9
318 TestNetworkPlugins/group/custom-flannel/DNS 0.22
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
322 TestStartStop/group/no-preload/serial/FirstStart 155.06
323 TestNetworkPlugins/group/calico/ControllerPod 5.03
324 TestNetworkPlugins/group/false/KubeletFlags 0.23
325 TestNetworkPlugins/group/false/NetCatPod 12.4
326 TestNetworkPlugins/group/calico/KubeletFlags 0.22
327 TestNetworkPlugins/group/calico/NetCatPod 12.46
328 TestNetworkPlugins/group/false/DNS 0.22
329 TestNetworkPlugins/group/false/Localhost 0.15
330 TestNetworkPlugins/group/false/HairPin 0.17
331 TestNetworkPlugins/group/calico/DNS 0.24
332 TestNetworkPlugins/group/calico/Localhost 0.16
333 TestNetworkPlugins/group/calico/HairPin 0.18
335 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.27
337 TestStartStop/group/newest-cni/serial/FirstStart 100.42
338 TestStartStop/group/old-k8s-version/serial/DeployApp 9.48
339 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.55
340 TestStartStop/group/old-k8s-version/serial/Stop 13.22
341 TestStartStop/group/no-preload/serial/DeployApp 8.44
342 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.49
343 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.3
344 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
346 TestStartStop/group/no-preload/serial/Stop 13.23
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.25
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.13
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
350 TestStartStop/group/no-preload/serial/SecondStart 314.81
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
352 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 332.58
353 TestStartStop/group/newest-cni/serial/DeployApp 0
354 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.41
355 TestStartStop/group/newest-cni/serial/Stop 13.14
356 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
357 TestStartStop/group/newest-cni/serial/SecondStart 77.83
358 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
359 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
360 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
361 TestStartStop/group/newest-cni/serial/Pause 2.57
363 TestStartStop/group/embed-certs/serial/FirstStart 73.02
364 TestStartStop/group/embed-certs/serial/DeployApp 10.41
365 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.14
366 TestStartStop/group/embed-certs/serial/Stop 13.12
367 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
368 TestStartStop/group/embed-certs/serial/SecondStart 332.53
369 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
370 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
371 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
372 TestStartStop/group/no-preload/serial/Pause 2.77
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
374 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
375 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
376 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.65
377 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.02
378 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
379 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
380 TestStartStop/group/embed-certs/serial/Pause 2.33

TestDownloadOnly/v1.16.0/json-events (7.99s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-624417 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-624417 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (7.989798145s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.99s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-624417
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-624417: exit status 85 (50.664631ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-624417 | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC |          |
	|         | -p download-only-624417        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 10:33:34
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 10:33:34.893770   13225 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:33:34.894022   13225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:33:34.894030   13225 out.go:309] Setting ErrFile to fd 2...
	I0925 10:33:34.894035   13225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:33:34.894221   13225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
	W0925 10:33:34.894331   13225 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17297-6032/.minikube/config/config.json: open /home/jenkins/minikube-integration/17297-6032/.minikube/config/config.json: no such file or directory
	I0925 10:33:34.894941   13225 out.go:303] Setting JSON to true
	I0925 10:33:34.895931   13225 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":966,"bootTime":1695637049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 10:33:34.895989   13225 start.go:138] virtualization: kvm guest
	I0925 10:33:34.898193   13225 out.go:97] [download-only-624417] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 10:33:34.899599   13225 out.go:169] MINIKUBE_LOCATION=17297
	W0925 10:33:34.898340   13225 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 10:33:34.898350   13225 notify.go:220] Checking for updates...
	I0925 10:33:34.902173   13225 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 10:33:34.903601   13225 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 10:33:34.904961   13225 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	I0925 10:33:34.906210   13225 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0925 10:33:34.908516   13225 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0925 10:33:34.908721   13225 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 10:33:35.023580   13225 out.go:97] Using the kvm2 driver based on user configuration
	I0925 10:33:35.023602   13225 start.go:298] selected driver: kvm2
	I0925 10:33:35.023606   13225 start.go:902] validating driver "kvm2" against <nil>
	I0925 10:33:35.023910   13225 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 10:33:35.024014   13225 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17297-6032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0925 10:33:35.037900   13225 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0925 10:33:35.037941   13225 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 10:33:35.038396   13225 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0925 10:33:35.038551   13225 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 10:33:35.038583   13225 cni.go:84] Creating CNI manager for ""
	I0925 10:33:35.038599   13225 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 10:33:35.038606   13225 start_flags.go:321] config:
	{Name:download-only-624417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-624417 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:33:35.038791   13225 iso.go:125] acquiring lock: {Name:mkb9e2f6e1d5a2b50ee182236ae1b19ef3677829 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 10:33:35.040524   13225 out.go:97] Downloading VM boot image ...
	I0925 10:33:35.040549   13225 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17297-6032/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0925 10:33:37.576369   13225 out.go:97] Starting control plane node download-only-624417 in cluster download-only-624417
	I0925 10:33:37.576393   13225 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 10:33:37.603103   13225 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0925 10:33:37.603135   13225 cache.go:57] Caching tarball of preloaded images
	I0925 10:33:37.603304   13225 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 10:33:37.605137   13225 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0925 10:33:37.605159   13225 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0925 10:33:37.637480   13225 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-624417"
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)
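
Note that the exit status 85 above is the expected outcome, not a failure: a --download-only start only caches the ISO and the preload tarball and never creates a node, so there is nothing for "minikube logs" to read. A minimal sketch of the same sequence by hand (the profile name download-demo is hypothetical):

	# cache the v1.16.0 artifacts without creating a VM
	minikube start -o=json --download-only -p download-demo --force \
	  --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2
	# exits non-zero (85): the profile has no control plane node yet
	minikube logs -p download-demo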

TestDownloadOnly/v1.28.2/json-events (4.18s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-624417 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-624417 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=kvm2 : (4.178160949s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (4.18s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-624417
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-624417: exit status 85 (52.300052ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-624417 | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC |          |
	|         | -p download-only-624417        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-624417 | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC |          |
	|         | -p download-only-624417        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 10:33:42
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 10:33:42.935832   13282 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:33:42.936064   13282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:33:42.936072   13282 out.go:309] Setting ErrFile to fd 2...
	I0925 10:33:42.936077   13282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:33:42.936257   13282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
	W0925 10:33:42.936357   13282 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17297-6032/.minikube/config/config.json: open /home/jenkins/minikube-integration/17297-6032/.minikube/config/config.json: no such file or directory
	I0925 10:33:42.936744   13282 out.go:303] Setting JSON to true
	I0925 10:33:42.937534   13282 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":974,"bootTime":1695637049,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 10:33:42.937600   13282 start.go:138] virtualization: kvm guest
	I0925 10:33:42.939440   13282 out.go:97] [download-only-624417] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 10:33:42.940948   13282 out.go:169] MINIKUBE_LOCATION=17297
	I0925 10:33:42.939571   13282 notify.go:220] Checking for updates...
	I0925 10:33:42.943583   13282 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 10:33:42.944994   13282 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 10:33:42.946270   13282 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	I0925 10:33:42.947594   13282 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-624417"
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.05s)

TestDownloadOnly/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.12s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-624417
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.53s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-950928 --alsologtostderr --binary-mirror http://127.0.0.1:38157 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-950928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-950928
--- PASS: TestBinaryMirror (0.53s)

TestOffline (162.76s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-605390 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-605390 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (2m41.769103533s)
helpers_test.go:175: Cleaning up "offline-docker-605390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-605390
--- PASS: TestOffline (162.76s)

TestAddons/Setup (151.4s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-686386 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-686386 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m31.403543348s)
--- PASS: TestAddons/Setup (151.40s)

TestAddons/parallel/Registry (15.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 21.067738ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-2nh4j" [90b5b339-934b-407a-a8c5-b4767b9fdbf3] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.023741173s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-w78ct" [9e573389-e5bb-4c7a-bf45-d7542b14f1f7] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012814922s
addons_test.go:316: (dbg) Run:  kubectl --context addons-686386 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-686386 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-686386 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.060820216s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-686386 ip
2023/09/25 10:36:34 [DEBUG] GET http://192.168.39.220:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-686386 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.96s)
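
The reachability probe this test performs can be repeated by hand on any cluster with the registry addon enabled; this is the same busybox one-liner the test runs, minus the test-specific --context flag:

	kubectl run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"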

TestAddons/parallel/Ingress (26.21s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-686386 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-686386 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-686386 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e78e6662-8e87-4c65-b8dd-424b74b26c63] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e78e6662-8e87-4c65-b8dd-424b74b26c63] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.01954273s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-686386 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-686386 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-686386 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.220
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-686386 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-686386 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-686386 addons disable ingress --alsologtostderr -v=1: (7.756244086s)
--- PASS: TestAddons/parallel/Ingress (26.21s)
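
The ingress validation reduces to two probes, both reproducible by hand against the same profile (substitute your own profile name):

	# curl the ingress from inside the VM, spoofing the Host header
	minikube -p addons-686386 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# resolve the ingress-dns test record against the cluster IP
	nslookup hello-john.test $(minikube -p addons-686386 ip)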

TestAddons/parallel/InspektorGadget (10.91s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wwsr7" [a809a3c8-ae01-4199-bfb3-1ae227712c5d] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.014292315s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-686386
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-686386: (5.899019785s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

TestAddons/parallel/MetricsServer (5.93s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 3.641946ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-v4krp" [02bb89eb-a9fe-4613-8989-0d9d44755a83] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012779436s
addons_test.go:391: (dbg) Run:  kubectl --context addons-686386 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-686386 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.93s)

TestAddons/parallel/HelmTiller (13.15s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 3.853937ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-wrqwr" [de45a0e8-0419-488a-b879-3bc7bcf92d3b] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.030751944s
addons_test.go:449: (dbg) Run:  kubectl --context addons-686386 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-686386 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.491745745s)
addons_test.go:454: kubectl --context addons-686386 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-686386 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.15s)

TestAddons/parallel/CSI (66.53s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 25.001686ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-686386 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-686386 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [401ba6fe-8fa6-469b-820b-a39eb8724d69] Pending
helpers_test.go:344: "task-pv-pod" [401ba6fe-8fa6-469b-820b-a39eb8724d69] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [401ba6fe-8fa6-469b-820b-a39eb8724d69] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 22.015175751s
addons_test.go:560: (dbg) Run:  kubectl --context addons-686386 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-686386 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-686386 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-686386 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-686386 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-686386 delete pod task-pv-pod: (1.19668335s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-686386 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-686386 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686386 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-686386 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e7df7064-20ae-48dd-9171-1a49a8d312f2] Pending
helpers_test.go:344: "task-pv-pod-restore" [e7df7064-20ae-48dd-9171-1a49a8d312f2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e7df7064-20ae-48dd-9171-1a49a8d312f2] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.019749145s
addons_test.go:602: (dbg) Run:  kubectl --context addons-686386 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-686386 delete pod task-pv-pod-restore: (1.139756891s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-686386 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-686386 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-686386 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-686386 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.79174411s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-686386 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (66.53s)
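
The long run of identical "get pvc" calls above is the test's poll loop waiting for the claim to bind. A hand-rolled equivalent of that wait, assuming the same claim name hpvc:

	# poll the same jsonpath the test checks until the claim reports Bound
	until [ "$(kubectl get pvc hpvc -n default -o jsonpath={.status.phase})" = "Bound" ]; do
	  sleep 2
	done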

TestAddons/parallel/Headlamp (15.28s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-686386 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-686386 --alsologtostderr -v=1: (1.268577964s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-rsgnb" [fca5dbd4-c2d0-4f99-aa9f-8ca7e052d7f3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-rsgnb" [fca5dbd4-c2d0-4f99-aa9f-8ca7e052d7f3] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.011311575s
--- PASS: TestAddons/parallel/Headlamp (15.28s)

TestAddons/parallel/CloudSpanner (5.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-9rctm" [9a1c997f-dad2-4cfe-aada-98bbc674bba9] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.016075405s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-686386
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-686386 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-686386 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (13.33s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-686386
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-686386: (13.087002389s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-686386
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-686386
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-686386
--- PASS: TestAddons/StoppedEnableDisable (13.33s)
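
What this test establishes is that addons can be toggled while the cluster is stopped; the same sequence by hand:

	minikube stop -p addons-686386
	minikube addons enable dashboard -p addons-686386    # accepted while the VM is down
	minikube addons disable dashboard -p addons-686386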

TestCertOptions (111.99s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-795820 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-795820 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m50.419317984s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-795820 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-795820 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-795820 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-795820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-795820
--- PASS: TestCertOptions (111.99s)
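
The assertions here boil down to inspecting the apiserver certificate inside the VM. To eyeball the SANs and port on any profile yourself (the grep filter is only for readability):

	minikube -p cert-options-795820 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"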

TestCertExpiration (297.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-736070 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-736070 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m18.393073978s)
E0925 11:10:25.176113   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-736070 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-736070 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (38.485493869s)
helpers_test.go:175: Cleaning up "cert-expiration-736070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-736070
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-736070: (1.069618027s)
--- PASS: TestCertExpiration (297.95s)
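
The flow under test: start with a deliberately short certificate lifetime, then restart with a longer one so minikube regenerates the certs. A sketch with a hypothetical profile name:

	minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=3m --driver=kvm2
	# once the 3m window has passed, restarting with a longer expiration rotates the certs
	minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h --driver=kvm2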

TestDockerFlags (133.05s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-368941 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-368941 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (2m11.570202499s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-368941 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-368941 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-368941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-368941
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-368941: (1.019008622s)
--- PASS: TestDockerFlags (133.05s)
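
The two systemctl probes are how the test confirms that the --docker-env and --docker-opt values reached the Docker unit; they are runnable as-is against the profile while it exists:

	# Environment should list FOO=BAR and BAZ=BAT; ExecStart should reflect the --docker-opt values
	minikube -p docker-flags-368941 ssh "sudo systemctl show docker --property=Environment --no-pager"
	minikube -p docker-flags-368941 ssh "sudo systemctl show docker --property=ExecStart --no-pager"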

TestForceSystemdFlag (51.97s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-652637 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-652637 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (50.778380086s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-652637 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-652637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-652637
--- PASS: TestForceSystemdFlag (51.97s)
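
Both force-systemd variants (the flag here, the env test below) share a single probe: ask Docker inside the VM which cgroup driver it ended up with, which should print "systemd" when the option took effect:

	minikube -p force-systemd-flag-652637 ssh "docker info --format {{.CgroupDriver}}"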

TestForceSystemdEnv (67.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-312392 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-312392 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m5.807685486s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-312392 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-312392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-312392
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-312392: (1.357324138s)
--- PASS: TestForceSystemdEnv (67.43s)

TestKVMDriverInstallOrUpdate (3.12s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.12s)

TestErrorSpam/setup (52.45s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-789682 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-789682 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-789682 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-789682 --driver=kvm2 : (52.449397034s)
--- PASS: TestErrorSpam/setup (52.45s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.19s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 pause
--- PASS: TestErrorSpam/pause (1.19s)

TestErrorSpam/unpause (1.24s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 unpause
--- PASS: TestErrorSpam/unpause (1.24s)

TestErrorSpam/stop (4.19s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 stop: (4.070815447s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789682 --log_dir /tmp/nospam-789682 stop
--- PASS: TestErrorSpam/stop (4.19s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/test/nested/copy/13213/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.12s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-068222 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-068222 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m17.114802875s)
--- PASS: TestFunctional/serial/StartWithProxy (77.12s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.24s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-068222 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-068222 --alsologtostderr -v=8: (36.238959937s)
functional_test.go:659: soft start took 36.239681828s for "functional-068222" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.24s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-068222 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.52s)

TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-068222 /tmp/TestFunctionalserialCacheCmdcacheadd_local3406906222/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 cache add minikube-local-cache-test:functional-068222
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 cache delete minikube-local-cache-test:functional-068222
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-068222
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068222 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (236.941956ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.19s)
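Note: the cache_reload flow above removes a cached image inside the VM (docker rmi), confirms that crictl inspecti now fails, then runs `minikube cache reload` to push the cached image back into the node and re-checks. The same sequence by hand, using only commands that appear in this log:

	out/minikube-linux-amd64 -p functional-068222 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-068222 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
	out/minikube-linux-amd64 -p functional-068222 cache reload
	out/minikube-linux-amd64 -p functional-068222 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again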

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 kubectl -- --context functional-068222 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-068222 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (41.81s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-068222 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0925 10:41:19.415046   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:41:19.421095   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:41:19.431358   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:41:19.451633   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:41:19.491979   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:41:19.572351   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:41:19.732768   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:41:20.053384   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:41:20.694523   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:41:21.974976   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:41:24.536791   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:41:29.657390   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-068222 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.812730597s)
functional_test.go:757: restart took 41.812844075s for "functional-068222" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.81s)
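Note: --extra-config takes component.key=value pairs that minikube forwards to the named control-plane component; here the apiserver is restarted with the NamespaceAutoProvision admission plugin enabled, and --wait=all blocks until every verified component is healthy again. The invocation from this run:

	out/minikube-linux-amd64 start -p functional-068222 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all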

TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-068222 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

TestFunctional/serial/LogsCmd (1.26s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-068222 logs: (1.254969628s)
--- PASS: TestFunctional/serial/LogsCmd (1.26s)

TestFunctional/serial/LogsFileCmd (1.2s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 logs --file /tmp/TestFunctionalserialLogsFileCmd2976354469/001/logs.txt
E0925 10:41:39.898066   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-068222 logs --file /tmp/TestFunctionalserialLogsFileCmd2976354469/001/logs.txt: (1.203076925s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

TestFunctional/serial/InvalidService (5.2s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-068222 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-068222
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-068222: exit status 115 (286.469218ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.161:30342 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-068222 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-068222 delete -f testdata/invalidsvc.yaml: (1.648499559s)
--- PASS: TestFunctional/serial/InvalidService (5.20s)
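Note: `minikube service` exits with status 115 (SVC_UNREACHABLE) when the Service object exists but no running pod backs it, which is exactly what the deliberately broken manifest provokes; the URL table is still printed before the error. A sketch of the check, assuming a manifest like the repo's testdata/invalidsvc.yaml that defines a Service with no healthy endpoints:

	kubectl --context functional-068222 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-068222    # exit status 115
	kubectl --context functional-068222 delete -f testdata/invalidsvc.yaml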

TestFunctional/parallel/ConfigCmd (0.29s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068222 config get cpus: exit status 14 (39.369562ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068222 config get cpus: exit status 14 (50.949922ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
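Note: `minikube config get` on an unset key exits with status 14 ("specified key could not be found in config"), so the test alternates unset/get/set/get to cover both the error and the success path:

	out/minikube-linux-amd64 -p functional-068222 config unset cpus
	out/minikube-linux-amd64 -p functional-068222 config get cpus     # exit 14: key not found
	out/minikube-linux-amd64 -p functional-068222 config set cpus 2
	out/minikube-linux-amd64 -p functional-068222 config get cpus     # prints the stored value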

TestFunctional/parallel/DashboardCmd (15.95s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-068222 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-068222 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 19536: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.95s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-068222 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-068222 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (150.865417ms)
-- stdout --
	* [functional-068222] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0925 10:41:59.987049   19034 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:41:59.987258   19034 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:41:59.987287   19034 out.go:309] Setting ErrFile to fd 2...
	I0925 10:41:59.987303   19034 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:41:59.987582   19034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
	I0925 10:41:59.988221   19034 out.go:303] Setting JSON to false
	I0925 10:41:59.989323   19034 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1471,"bootTime":1695637049,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 10:41:59.989443   19034 start.go:138] virtualization: kvm guest
	I0925 10:41:59.991694   19034 out.go:177] * [functional-068222] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 10:41:59.993471   19034 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 10:41:59.993517   19034 notify.go:220] Checking for updates...
	I0925 10:41:59.995226   19034 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 10:41:59.996778   19034 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 10:41:59.998303   19034 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	I0925 10:41:59.999652   19034 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 10:42:00.001412   19034 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 10:42:00.003423   19034 config.go:182] Loaded profile config "functional-068222": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 10:42:00.004098   19034 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:42:00.004196   19034 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:42:00.023824   19034 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0925 10:42:00.024700   19034 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:42:00.025338   19034 main.go:141] libmachine: Using API Version  1
	I0925 10:42:00.025355   19034 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:42:00.025795   19034 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:42:00.025985   19034 main.go:141] libmachine: (functional-068222) Calling .DriverName
	I0925 10:42:00.026226   19034 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 10:42:00.026620   19034 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:42:00.026655   19034 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:42:00.041623   19034 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39947
	I0925 10:42:00.042083   19034 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:42:00.042549   19034 main.go:141] libmachine: Using API Version  1
	I0925 10:42:00.042570   19034 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:42:00.042869   19034 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:42:00.043029   19034 main.go:141] libmachine: (functional-068222) Calling .DriverName
	I0925 10:42:00.077882   19034 out.go:177] * Using the kvm2 driver based on existing profile
	I0925 10:42:00.079522   19034 start.go:298] selected driver: kvm2
	I0925 10:42:00.079539   19034 start.go:902] validating driver "kvm2" against &{Name:functional-068222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-068222 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.161 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 C
ertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:42:00.079653   19034 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 10:42:00.082363   19034 out.go:177] 
	W0925 10:42:00.083985   19034 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0925 10:42:00.085479   19034 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-068222 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
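Note: even with --dry-run, minikube validates the requested resources against the existing profile, so a memory request below the usable minimum of 1800MB aborts with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before anything is provisioned. Both invocations from this run:

	out/minikube-linux-amd64 start -p functional-068222 --dry-run --memory 250MB --alsologtostderr --driver=kvm2   # exit 23
	out/minikube-linux-amd64 start -p functional-068222 --dry-run --alsologtostderr -v=1 --driver=kvm2             # passes validation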

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-068222 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-068222 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (144.693484ms)
-- stdout --
	* [functional-068222] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0925 10:41:59.830590   18991 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:41:59.830992   18991 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:41:59.831004   18991 out.go:309] Setting ErrFile to fd 2...
	I0925 10:41:59.831012   18991 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:41:59.831584   18991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
	I0925 10:41:59.832356   18991 out.go:303] Setting JSON to false
	I0925 10:41:59.833412   18991 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1471,"bootTime":1695637049,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 10:41:59.833477   18991 start.go:138] virtualization: kvm guest
	I0925 10:41:59.835523   18991 out.go:177] * [functional-068222] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0925 10:41:59.837551   18991 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 10:41:59.837561   18991 notify.go:220] Checking for updates...
	I0925 10:41:59.839320   18991 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 10:41:59.841043   18991 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	I0925 10:41:59.842664   18991 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	I0925 10:41:59.844163   18991 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 10:41:59.845789   18991 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 10:41:59.847557   18991 config.go:182] Loaded profile config "functional-068222": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 10:41:59.848026   18991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:41:59.848084   18991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:41:59.865334   18991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46447
	I0925 10:41:59.865826   18991 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:41:59.866368   18991 main.go:141] libmachine: Using API Version  1
	I0925 10:41:59.866397   18991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:41:59.866790   18991 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:41:59.867016   18991 main.go:141] libmachine: (functional-068222) Calling .DriverName
	I0925 10:41:59.867241   18991 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 10:41:59.867625   18991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:41:59.867664   18991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:41:59.887911   18991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39269
	I0925 10:41:59.888340   18991 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:41:59.888805   18991 main.go:141] libmachine: Using API Version  1
	I0925 10:41:59.888835   18991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:41:59.889254   18991 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:41:59.889484   18991 main.go:141] libmachine: (functional-068222) Calling .DriverName
	I0925 10:41:59.927257   18991 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0925 10:41:59.928856   18991 start.go:298] selected driver: kvm2
	I0925 10:41:59.928875   18991 start.go:902] validating driver "kvm2" against &{Name:functional-068222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-068222 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.161 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 C
ertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:41:59.928986   18991 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 10:41:59.931506   18991 out.go:177] 
	W0925 10:41:59.932956   18991 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0925 10:41:59.934775   18991 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

TestFunctional/parallel/ServiceCmdConnect (12.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-068222 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-068222 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-qfp8j" [e9be0060-401c-4e79-8331-d016b543f848] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-qfp8j" [e9be0060-401c-4e79-8331-d016b543f848] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.015115813s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.161:30666
functional_test.go:1674: http://192.168.39.161:30666: success! body:

Hostname: hello-node-connect-55497b8b78-qfp8j

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.161:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.161:30666
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.55s)
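Note: the test deploys the echoserver image, exposes it as a NodePort Service, resolves the URL with `minikube service --url`, and asserts that an HTTP GET succeeds (the echoed request is shown above). The same round trip by hand, assuming curl is available on the host:

	kubectl --context functional-068222 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-068222 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-068222 service hello-node-connect --url)
	curl -s "$URL"    # prints the echoed request, including Hostname and headers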

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (58.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9101175a-0db6-42c9-9f34-7b0a8696c015] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.024386854s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-068222 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-068222 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-068222 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-068222 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f70fcde8-62bb-4644-9d17-23d506ed6824] Pending
helpers_test.go:344: "sp-pod" [f70fcde8-62bb-4644-9d17-23d506ed6824] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f70fcde8-62bb-4644-9d17-23d506ed6824] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.023000593s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-068222 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-068222 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-068222 delete -f testdata/storage-provisioner/pod.yaml: (1.413521563s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-068222 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [47b37064-8848-44fb-ae2c-96b7f25994b3] Pending
helpers_test.go:344: "sp-pod" [47b37064-8848-44fb-ae2c-96b7f25994b3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [47b37064-8848-44fb-ae2c-96b7f25994b3] Running
E0925 10:42:41.338647   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 33.029102048s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-068222 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (58.04s)
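Note: the test proves persistence by writing a file through one pod, deleting that pod, scheduling a fresh pod against the same PVC, and reading the file back. The persistence check itself, using only commands that appear in this log:

	kubectl --context functional-068222 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-068222 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-068222 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-068222 exec sp-pod -- ls /tmp/mount    # foo survives the pod replacement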

TestFunctional/parallel/SSHCmd (0.48s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

TestFunctional/parallel/CpCmd (0.98s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh -n functional-068222 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 cp functional-068222:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2881142220/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh -n functional-068222 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.98s)

TestFunctional/parallel/MySQL (44.26s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-068222 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-9nw5r" [9b3e026b-66c3-4696-adb5-66c379bf5927] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-9nw5r" [9b3e026b-66c3-4696-adb5-66c379bf5927] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 39.011997418s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-068222 exec mysql-859648c796-9nw5r -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-068222 exec mysql-859648c796-9nw5r -- mysql -ppassword -e "show databases;": exit status 1 (337.989592ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-068222 exec mysql-859648c796-9nw5r -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-068222 exec mysql-859648c796-9nw5r -- mysql -ppassword -e "show databases;": exit status 1 (172.599864ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-068222 exec mysql-859648c796-9nw5r -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-068222 exec mysql-859648c796-9nw5r -- mysql -ppassword -e "show databases;": exit status 1 (151.318995ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-068222 exec mysql-859648c796-9nw5r -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (44.26s)
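Note: the first three exec attempts fail while mysqld is still coming up (access denied during initialization, then the socket not yet available), and the harness simply retries until `show databases;` succeeds. A hedged retry sketch; the 10x5s bound is arbitrary and not taken from the harness:

	for i in $(seq 1 10); do
	  kubectl --context functional-068222 exec mysql-859648c796-9nw5r -- \
	    mysql -ppassword -e "show databases;" && break
	  sleep 5
	done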

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13213/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "sudo cat /etc/test/nested/copy/13213/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)
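Note: FileSync verifies that a file placed under the minikube home's files/ tree on the host (here /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/test/nested/copy/13213/hosts) is copied to the same path inside the VM. A sketch using the default ~/.minikube location and a hypothetical file path:

	mkdir -p ~/.minikube/files/etc/test
	echo "hello from the host" > ~/.minikube/files/etc/test/hello
	out/minikube-linux-amd64 start -p functional-068222          # files are synced at start
	out/minikube-linux-amd64 -p functional-068222 ssh "cat /etc/test/hello"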

TestFunctional/parallel/CertSync (1.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13213.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "sudo cat /etc/ssl/certs/13213.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13213.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "sudo cat /usr/share/ca-certificates/13213.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/132132.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "sudo cat /etc/ssl/certs/132132.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/132132.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "sudo cat /usr/share/ca-certificates/132132.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)
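
CertSync checks each certificate in two forms: the literal .pem path and a hash-named entry (51391683.0, 3ec20f2e.0) of the kind OpenSSL-style trust stores use to look up certificates in /etc/ssl/certs. A Go sketch that walks the same six locations, assuming minikube is on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	profile := "functional-068222"
    	// Literal .pem paths plus the hash-named trust-store entries checked above.
    	paths := []string{
    		"/etc/ssl/certs/13213.pem",
    		"/usr/share/ca-certificates/13213.pem",
    		"/etc/ssl/certs/51391683.0",
    		"/etc/ssl/certs/132132.pem",
    		"/usr/share/ca-certificates/132132.pem",
    		"/etc/ssl/certs/3ec20f2e.0",
    	}
    	for _, p := range paths {
    		if err := exec.Command("minikube", "-p", profile, "ssh",
    			"sudo cat "+p).Run(); err != nil {
    			fmt.Printf("missing %s: %v\n", p, err)
    		}
    	}
    }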

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-068222 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
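
The NodeLabels check is a single kubectl call: a go-template that ranges over the first node's metadata.labels and prints every key. The same query standalone, in a Go sketch assuming kubectl is on PATH and that a minikube-stamped label such as minikube.k8s.io/name is what the caller looks for:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same go-template as the test: print each label key on node 0.
    	tmpl := "--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"
    	out, err := exec.Command("kubectl", "--context", "functional-068222",
    		"get", "nodes", "--output=go-template", tmpl).Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(strings.Contains(string(out), "minikube.k8s.io/name"))
    }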

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068222 ssh "sudo systemctl is-active crio": exit status 1 (262.415941ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)
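
This PASS with a non-zero exit is deliberate: systemctl is-active prints "inactive" and exits 3 for a stopped unit, ssh propagates that, and the test treats exactly that combination as proof the non-active runtime (crio here, since the cluster runs docker) is disabled. A Go sketch of the check, assuming minikube is on PATH:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // runtimeDisabled reports whether a systemd unit inside the VM is inactive.
    // A non-zero exit with "inactive" on stdout is the expected outcome for a
    // stopped unit, not a failure.
    func runtimeDisabled(profile, unit string) (bool, error) {
    	out, err := exec.Command("minikube", "-p", profile, "ssh",
    		"sudo systemctl is-active "+unit).Output()
    	state := strings.TrimSpace(string(out))
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		return state == "inactive", nil
    	}
    	return false, err // a clean exit means the unit is active
    }

    func main() {
    	disabled, err := runtimeDisabled("functional-068222", "crio")
    	fmt.Println(disabled, err)
    }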

                                                
                                    
x
+
TestFunctional/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (13.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-068222 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-068222 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-nrcjg" [e786601b-7b45-4d78-a036-0e748cc91997] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-nrcjg" [e786601b-7b45-4d78-a036-0e748cc91997] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.013722661s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.22s)
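
DeployApp is the standard three-step flow: create the deployment, expose it as a NodePort service, then poll pods matching app=hello-node until one is up. A simplified Go sketch of the wait loop, assuming kubectl is on PATH; it checks only the pod phase, while the harness above also waits on container readiness:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForLabel polls pods matching a label selector until one reports
    // phase Running, a simplified version of the 10m wait above.
    func waitForLabel(context, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, _ := exec.Command("kubectl", "--context", context, "get", "pods",
    			"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
    		if strings.Contains(string(out), "Running") {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("no Running pod for %q within %v", selector, timeout)
    }

    func main() {
    	ctx := "functional-068222"
    	// Errors ignored for brevity in this sketch.
    	exec.Command("kubectl", "--context", ctx, "create", "deployment",
    		"hello-node", "--image=registry.k8s.io/echoserver:1.8").Run()
    	exec.Command("kubectl", "--context", ctx, "expose", "deployment",
    		"hello-node", "--type=NodePort", "--port=8080").Run()
    	fmt.Println(waitForLabel(ctx, "app=hello-node", 10*time.Minute))
    }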

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "275.883027ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "40.733876ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "264.015767ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "49.779972ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)
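
The two timings differ because --light lists profiles without contacting each cluster to validate its status, which is why it returns in roughly 50ms against roughly 260ms for the full listing. Measuring one invocation the same way the harness does, in a minimal Go sketch assuming minikube is on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	if err := exec.Command("minikube", "profile", "list", "-o", "json", "--light").Run(); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("Took %q to run\n", time.Since(start).String())
    }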

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-068222 /tmp/TestFunctionalparallelMountCmdany-port737056630/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1695638507704251666" to /tmp/TestFunctionalparallelMountCmdany-port737056630/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1695638507704251666" to /tmp/TestFunctionalparallelMountCmdany-port737056630/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1695638507704251666" to /tmp/TestFunctionalparallelMountCmdany-port737056630/001/test-1695638507704251666
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068222 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (218.357216ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 25 10:41 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 25 10:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 25 10:41 test-1695638507704251666
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh cat /mount-9p/test-1695638507704251666
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-068222 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [032840c2-8149-43ae-96c8-91e010a8b2a7] Pending
helpers_test.go:344: "busybox-mount" [032840c2-8149-43ae-96c8-91e010a8b2a7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [032840c2-8149-43ae-96c8-91e010a8b2a7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [032840c2-8149-43ae-96c8-91e010a8b2a7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.012565116s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-068222 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-068222 /tmp/TestFunctionalparallelMountCmdany-port737056630/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.60s)
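
The first findmnt failure above is just a race: the mount command runs as a background daemon and the 9p server needs a moment before the guest mount appears, so the harness retries. A Go sketch of the start-then-poll pattern, assuming minikube is on PATH; the host directory is a placeholder that must already exist:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	profile, hostDir := "functional-068222", "/tmp/mnt-demo" // hostDir is a placeholder
    	// Start the 9p mount as a background daemon, as the harness does.
    	mount := exec.Command("minikube", "mount", "-p", profile, hostDir+":/mount-9p")
    	if err := mount.Start(); err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer mount.Process.Kill()

    	// findmnt fails until the 9p server is up, so poll it.
    	for i := 0; i < 15; i++ {
    		if exec.Command("minikube", "-p", profile, "ssh",
    			"findmnt -T /mount-9p | grep 9p").Run() == nil {
    			fmt.Println("mounted")
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("mount never appeared")
    }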

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-068222 /tmp/TestFunctionalparallelMountCmdspecific-port2822553185/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068222 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (228.770272ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-068222 /tmp/TestFunctionalparallelMountCmdspecific-port2822553185/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068222 ssh "sudo umount -f /mount-9p": exit status 1 (215.423226ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-068222 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-068222 /tmp/TestFunctionalparallelMountCmdspecific-port2822553185/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)
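
The exit status 1 on the final umount is benign: the mount daemon had already been stopped, so the in-guest umount -f reports "not mounted" (status 32) and the harness just logs it. A Go sketch of cleanup that tolerates that case, assuming minikube is on PATH; note that minikube ssh itself exits 1 here, so the sketch matches on the output rather than the exit code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // cleanupMount force-unmounts the guest path but treats "not mounted"
    // as success, since that is the normal state after the daemon exits.
    func cleanupMount(profile, guestPath string) error {
    	out, err := exec.Command("minikube", "-p", profile, "ssh",
    		"sudo umount -f "+guestPath).CombinedOutput()
    	if err != nil && strings.Contains(string(out), "not mounted") {
    		return nil // already unmounted: fine during cleanup
    	}
    	return err
    }

    func main() {
    	fmt.Println(cleanupMount("functional-068222", "/mount-9p"))
    }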

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-068222 /tmp/TestFunctionalparallelMountCmdVerifyCleanup26098525/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-068222 /tmp/TestFunctionalparallelMountCmdVerifyCleanup26098525/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-068222 /tmp/TestFunctionalparallelMountCmdVerifyCleanup26098525/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068222 ssh "findmnt -T" /mount1: exit status 1 (274.843351ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-068222 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-068222 /tmp/TestFunctionalparallelMountCmdVerifyCleanup26098525/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-068222 /tmp/TestFunctionalparallelMountCmdVerifyCleanup26098525/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-068222 /tmp/TestFunctionalparallelMountCmdVerifyCleanup26098525/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)
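
The three "unable to find parent, assuming dead" lines are the point of this test: a single mount --kill=true tears down every mount daemon for the profile at once, so the per-mount stop attempts that follow find nothing left to kill. The kill invocation on its own, in a short Go sketch assuming minikube is on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Kills all mount daemons for the profile in one shot, as in the log above.
    	err := exec.Command("minikube", "mount", "-p", "functional-068222",
    		"--kill=true").Run()
    	fmt.Println(err)
    }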

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 service list -o json
functional_test.go:1493: Took "311.607489ms" to run "out/minikube-linux-amd64 -p functional-068222 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 service --namespace=default --https --url hello-node
E0925 10:42:00.378455   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
functional_test.go:1521: found endpoint: https://192.168.39.161:31872
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)
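
The endpoint printed above is the node's IP plus the NodePort Kubernetes allocated for the service (31872), which "service --https --url" assembles into a ready-to-use URL. Fetching it programmatically, in a small Go sketch assuming minikube is on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("minikube", "-p", "functional-068222", "service",
    		"--namespace=default", "--https", "--url", "hello-node").Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("found endpoint:", strings.TrimSpace(string(out)))
    }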

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-068222 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-068222
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-068222
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-068222 image ls --format short --alsologtostderr:
I0925 10:42:21.792784   20222 out.go:296] Setting OutFile to fd 1 ...
I0925 10:42:21.792880   20222 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:21.792891   20222 out.go:309] Setting ErrFile to fd 2...
I0925 10:42:21.792896   20222 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:21.793050   20222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
I0925 10:42:21.793570   20222 config.go:182] Loaded profile config "functional-068222": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 10:42:21.793657   20222 config.go:182] Loaded profile config "functional-068222": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 10:42:21.794032   20222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 10:42:21.794093   20222 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 10:42:21.814029   20222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
I0925 10:42:21.814507   20222 main.go:141] libmachine: () Calling .GetVersion
I0925 10:42:21.815139   20222 main.go:141] libmachine: Using API Version  1
I0925 10:42:21.815183   20222 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 10:42:21.815561   20222 main.go:141] libmachine: () Calling .GetMachineName
I0925 10:42:21.815770   20222 main.go:141] libmachine: (functional-068222) Calling .GetState
I0925 10:42:21.817779   20222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 10:42:21.817810   20222 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 10:42:21.832058   20222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34701
I0925 10:42:21.832500   20222 main.go:141] libmachine: () Calling .GetVersion
I0925 10:42:21.832968   20222 main.go:141] libmachine: Using API Version  1
I0925 10:42:21.832984   20222 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 10:42:21.833255   20222 main.go:141] libmachine: () Calling .GetMachineName
I0925 10:42:21.833437   20222 main.go:141] libmachine: (functional-068222) Calling .DriverName
I0925 10:42:21.833648   20222 ssh_runner.go:195] Run: systemctl --version
I0925 10:42:21.833681   20222 main.go:141] libmachine: (functional-068222) Calling .GetSSHHostname
I0925 10:42:21.836587   20222 main.go:141] libmachine: (functional-068222) DBG | domain functional-068222 has defined MAC address 52:54:00:d1:b3:12 in network mk-functional-068222
I0925 10:42:21.836959   20222 main.go:141] libmachine: (functional-068222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b3:12", ip: ""} in network mk-functional-068222: {Iface:virbr1 ExpiryTime:2023-09-25 11:39:13 +0000 UTC Type:0 Mac:52:54:00:d1:b3:12 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:functional-068222 Clientid:01:52:54:00:d1:b3:12}
I0925 10:42:21.836996   20222 main.go:141] libmachine: (functional-068222) DBG | domain functional-068222 has defined IP address 192.168.39.161 and MAC address 52:54:00:d1:b3:12 in network mk-functional-068222
I0925 10:42:21.837220   20222 main.go:141] libmachine: (functional-068222) Calling .GetSSHPort
I0925 10:42:21.837387   20222 main.go:141] libmachine: (functional-068222) Calling .GetSSHKeyPath
I0925 10:42:21.837505   20222 main.go:141] libmachine: (functional-068222) Calling .GetSSHUsername
I0925 10:42:21.837616   20222 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/functional-068222/id_rsa Username:docker}
I0925 10:42:21.940172   20222 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0925 10:42:22.008165   20222 main.go:141] libmachine: Making call to close driver server
I0925 10:42:22.008200   20222 main.go:141] libmachine: (functional-068222) Calling .Close
I0925 10:42:22.008521   20222 main.go:141] libmachine: Successfully made call to close driver server
I0925 10:42:22.008545   20222 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 10:42:22.008557   20222 main.go:141] libmachine: Making call to close driver server
I0925 10:42:22.008572   20222 main.go:141] libmachine: (functional-068222) Calling .Close
I0925 10:42:22.008824   20222 main.go:141] libmachine: Successfully made call to close driver server
I0925 10:42:22.008843   20222 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-068222 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-068222 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 61395b4c586da | 187MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| docker.io/library/minikube-local-cache-test | functional-068222 | 83f9101b885db | 30B    |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-068222 image ls --format table --alsologtostderr:
I0925 10:42:22.288058   20318 out.go:296] Setting OutFile to fd 1 ...
I0925 10:42:22.288358   20318 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:22.288368   20318 out.go:309] Setting ErrFile to fd 2...
I0925 10:42:22.288374   20318 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:22.288646   20318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
I0925 10:42:22.289455   20318 config.go:182] Loaded profile config "functional-068222": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 10:42:22.289584   20318 config.go:182] Loaded profile config "functional-068222": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 10:42:22.290085   20318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 10:42:22.290182   20318 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 10:42:22.308005   20318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
I0925 10:42:22.308501   20318 main.go:141] libmachine: () Calling .GetVersion
I0925 10:42:22.309128   20318 main.go:141] libmachine: Using API Version  1
I0925 10:42:22.309156   20318 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 10:42:22.309588   20318 main.go:141] libmachine: () Calling .GetMachineName
I0925 10:42:22.309815   20318 main.go:141] libmachine: (functional-068222) Calling .GetState
I0925 10:42:22.312037   20318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 10:42:22.312087   20318 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 10:42:22.327486   20318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41513
I0925 10:42:22.327984   20318 main.go:141] libmachine: () Calling .GetVersion
I0925 10:42:22.328545   20318 main.go:141] libmachine: Using API Version  1
I0925 10:42:22.328582   20318 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 10:42:22.329011   20318 main.go:141] libmachine: () Calling .GetMachineName
I0925 10:42:22.329200   20318 main.go:141] libmachine: (functional-068222) Calling .DriverName
I0925 10:42:22.329449   20318 ssh_runner.go:195] Run: systemctl --version
I0925 10:42:22.329481   20318 main.go:141] libmachine: (functional-068222) Calling .GetSSHHostname
I0925 10:42:22.332755   20318 main.go:141] libmachine: (functional-068222) DBG | domain functional-068222 has defined MAC address 52:54:00:d1:b3:12 in network mk-functional-068222
I0925 10:42:22.333311   20318 main.go:141] libmachine: (functional-068222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b3:12", ip: ""} in network mk-functional-068222: {Iface:virbr1 ExpiryTime:2023-09-25 11:39:13 +0000 UTC Type:0 Mac:52:54:00:d1:b3:12 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:functional-068222 Clientid:01:52:54:00:d1:b3:12}
I0925 10:42:22.333348   20318 main.go:141] libmachine: (functional-068222) DBG | domain functional-068222 has defined IP address 192.168.39.161 and MAC address 52:54:00:d1:b3:12 in network mk-functional-068222
I0925 10:42:22.333648   20318 main.go:141] libmachine: (functional-068222) Calling .GetSSHPort
I0925 10:42:22.333848   20318 main.go:141] libmachine: (functional-068222) Calling .GetSSHKeyPath
I0925 10:42:22.333993   20318 main.go:141] libmachine: (functional-068222) Calling .GetSSHUsername
I0925 10:42:22.334156   20318 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/functional-068222/id_rsa Username:docker}
I0925 10:42:22.439607   20318 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0925 10:42:22.482888   20318 main.go:141] libmachine: Making call to close driver server
I0925 10:42:22.482910   20318 main.go:141] libmachine: (functional-068222) Calling .Close
I0925 10:42:22.483194   20318 main.go:141] libmachine: Successfully made call to close driver server
I0925 10:42:22.483218   20318 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 10:42:22.483229   20318 main.go:141] libmachine: Making call to close driver server
I0925 10:42:22.483237   20318 main.go:141] libmachine: (functional-068222) Calling .Close
I0925 10:42:22.483490   20318 main.go:141] libmachine: Successfully made call to close driver server
I0925 10:42:22.483509   20318 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 10:42:22.483528   20318 main.go:141] libmachine: (functional-068222) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-068222 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc
959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"83f9101b885db6ef31d303b07739f6e37fa398df8aad86bf9617501f3af35eb6","repoDigests":[],"repoTags":["docker.io/library/mi
nikube-local-cache-test:functional-068222"],"size":"30"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-068222"],"size":"32900000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-068222 image ls --format json --alsologtostderr:
I0925 10:42:22.033317   20268 out.go:296] Setting OutFile to fd 1 ...
I0925 10:42:22.033585   20268 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:22.033594   20268 out.go:309] Setting ErrFile to fd 2...
I0925 10:42:22.033599   20268 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:22.033799   20268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
I0925 10:42:22.034550   20268 config.go:182] Loaded profile config "functional-068222": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 10:42:22.034687   20268 config.go:182] Loaded profile config "functional-068222": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 10:42:22.035229   20268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 10:42:22.035299   20268 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 10:42:22.051010   20268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35723
I0925 10:42:22.051513   20268 main.go:141] libmachine: () Calling .GetVersion
I0925 10:42:22.052148   20268 main.go:141] libmachine: Using API Version  1
I0925 10:42:22.052168   20268 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 10:42:22.052545   20268 main.go:141] libmachine: () Calling .GetMachineName
I0925 10:42:22.052785   20268 main.go:141] libmachine: (functional-068222) Calling .GetState
I0925 10:42:22.054951   20268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 10:42:22.054989   20268 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 10:42:22.069046   20268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46575
I0925 10:42:22.069479   20268 main.go:141] libmachine: () Calling .GetVersion
I0925 10:42:22.069958   20268 main.go:141] libmachine: Using API Version  1
I0925 10:42:22.069989   20268 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 10:42:22.070310   20268 main.go:141] libmachine: () Calling .GetMachineName
I0925 10:42:22.070512   20268 main.go:141] libmachine: (functional-068222) Calling .DriverName
I0925 10:42:22.070767   20268 ssh_runner.go:195] Run: systemctl --version
I0925 10:42:22.071041   20268 main.go:141] libmachine: (functional-068222) Calling .GetSSHHostname
I0925 10:42:22.074325   20268 main.go:141] libmachine: (functional-068222) DBG | domain functional-068222 has defined MAC address 52:54:00:d1:b3:12 in network mk-functional-068222
I0925 10:42:22.074780   20268 main.go:141] libmachine: (functional-068222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b3:12", ip: ""} in network mk-functional-068222: {Iface:virbr1 ExpiryTime:2023-09-25 11:39:13 +0000 UTC Type:0 Mac:52:54:00:d1:b3:12 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:functional-068222 Clientid:01:52:54:00:d1:b3:12}
I0925 10:42:22.074814   20268 main.go:141] libmachine: (functional-068222) DBG | domain functional-068222 has defined IP address 192.168.39.161 and MAC address 52:54:00:d1:b3:12 in network mk-functional-068222
I0925 10:42:22.075045   20268 main.go:141] libmachine: (functional-068222) Calling .GetSSHPort
I0925 10:42:22.075266   20268 main.go:141] libmachine: (functional-068222) Calling .GetSSHKeyPath
I0925 10:42:22.075423   20268 main.go:141] libmachine: (functional-068222) Calling .GetSSHUsername
I0925 10:42:22.075551   20268 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/functional-068222/id_rsa Username:docker}
I0925 10:42:22.182741   20268 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0925 10:42:22.236158   20268 main.go:141] libmachine: Making call to close driver server
I0925 10:42:22.236175   20268 main.go:141] libmachine: (functional-068222) Calling .Close
I0925 10:42:22.236484   20268 main.go:141] libmachine: Successfully made call to close driver server
I0925 10:42:22.236502   20268 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 10:42:22.236510   20268 main.go:141] libmachine: Making call to close driver server
I0925 10:42:22.236519   20268 main.go:141] libmachine: (functional-068222) Calling .Close
I0925 10:42:22.236751   20268 main.go:141] libmachine: Successfully made call to close driver server
I0925 10:42:22.236766   20268 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
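
The JSON listing is the machine-readable form of the same image table: an array of objects with id, repoDigests, repoTags, and size (bytes, as a string). A Go sketch that consumes it, assuming minikube is on PATH; the struct covers only the fields visible above:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // image mirrors the fields visible in the JSON output above.
    type image struct {
    	ID       string   `json:"id"`
    	RepoTags []string `json:"repoTags"`
    	Size     string   `json:"size"` // bytes, reported as a string
    }

    func main() {
    	out, err := exec.Command("minikube", "-p", "functional-068222",
    		"image", "ls", "--format", "json").Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	var images []image
    	if err := json.Unmarshal(out, &images); err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, img := range images {
    		fmt.Println(img.RepoTags, img.Size)
    	}
    }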

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-068222 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-068222
size: "32900000"
- id: 83f9101b885db6ef31d303b07739f6e37fa398df8aad86bf9617501f3af35eb6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-068222
size: "30"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-068222 image ls --format yaml --alsologtostderr:
I0925 10:42:21.799380   20223 out.go:296] Setting OutFile to fd 1 ...
I0925 10:42:21.799501   20223 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:21.799513   20223 out.go:309] Setting ErrFile to fd 2...
I0925 10:42:21.799520   20223 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:21.799843   20223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
I0925 10:42:21.800608   20223 config.go:182] Loaded profile config "functional-068222": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 10:42:21.800772   20223 config.go:182] Loaded profile config "functional-068222": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 10:42:21.801288   20223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 10:42:21.801357   20223 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 10:42:21.815601   20223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46669
I0925 10:42:21.816173   20223 main.go:141] libmachine: () Calling .GetVersion
I0925 10:42:21.816632   20223 main.go:141] libmachine: Using API Version  1
I0925 10:42:21.816653   20223 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 10:42:21.817215   20223 main.go:141] libmachine: () Calling .GetMachineName
I0925 10:42:21.817423   20223 main.go:141] libmachine: (functional-068222) Calling .GetState
I0925 10:42:21.819497   20223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 10:42:21.819537   20223 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 10:42:21.833294   20223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41981
I0925 10:42:21.833650   20223 main.go:141] libmachine: () Calling .GetVersion
I0925 10:42:21.834092   20223 main.go:141] libmachine: Using API Version  1
I0925 10:42:21.834117   20223 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 10:42:21.834635   20223 main.go:141] libmachine: () Calling .GetMachineName
I0925 10:42:21.834915   20223 main.go:141] libmachine: (functional-068222) Calling .DriverName
I0925 10:42:21.835116   20223 ssh_runner.go:195] Run: systemctl --version
I0925 10:42:21.835145   20223 main.go:141] libmachine: (functional-068222) Calling .GetSSHHostname
I0925 10:42:21.838407   20223 main.go:141] libmachine: (functional-068222) DBG | domain functional-068222 has defined MAC address 52:54:00:d1:b3:12 in network mk-functional-068222
I0925 10:42:21.838743   20223 main.go:141] libmachine: (functional-068222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b3:12", ip: ""} in network mk-functional-068222: {Iface:virbr1 ExpiryTime:2023-09-25 11:39:13 +0000 UTC Type:0 Mac:52:54:00:d1:b3:12 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:functional-068222 Clientid:01:52:54:00:d1:b3:12}
I0925 10:42:21.838791   20223 main.go:141] libmachine: (functional-068222) DBG | domain functional-068222 has defined IP address 192.168.39.161 and MAC address 52:54:00:d1:b3:12 in network mk-functional-068222
I0925 10:42:21.838910   20223 main.go:141] libmachine: (functional-068222) Calling .GetSSHPort
I0925 10:42:21.839072   20223 main.go:141] libmachine: (functional-068222) Calling .GetSSHKeyPath
I0925 10:42:21.839214   20223 main.go:141] libmachine: (functional-068222) Calling .GetSSHUsername
I0925 10:42:21.839343   20223 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/functional-068222/id_rsa Username:docker}
I0925 10:42:21.942486   20223 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0925 10:42:21.984517   20223 main.go:141] libmachine: Making call to close driver server
I0925 10:42:21.984538   20223 main.go:141] libmachine: (functional-068222) Calling .Close
I0925 10:42:21.984863   20223 main.go:141] libmachine: Successfully made call to close driver server
I0925 10:42:21.984891   20223 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 10:42:21.984910   20223 main.go:141] libmachine: Making call to close driver server
I0925 10:42:21.984923   20223 main.go:141] libmachine: (functional-068222) Calling .Close
I0925 10:42:21.985158   20223 main.go:141] libmachine: Successfully made call to close driver server
I0925 10:42:21.985183   20223 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068222 ssh pgrep buildkitd: exit status 1 (229.753095ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image build -t localhost/my-image:functional-068222 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-068222 image build -t localhost/my-image:functional-068222 testdata/build --alsologtostderr: (3.115837869s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-068222 image build -t localhost/my-image:functional-068222 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 1ac4f79d3f9e
Removing intermediate container 1ac4f79d3f9e
---> 7ffba0f8ba19
Step 3/3 : ADD content.txt /
---> 016e4eaf2916
Successfully built 016e4eaf2916
Successfully tagged localhost/my-image:functional-068222
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-068222 image build -t localhost/my-image:functional-068222 testdata/build --alsologtostderr:
I0925 10:42:22.294991   20319 out.go:296] Setting OutFile to fd 1 ...
I0925 10:42:22.295336   20319 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:22.295351   20319 out.go:309] Setting ErrFile to fd 2...
I0925 10:42:22.295358   20319 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:22.295626   20319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
I0925 10:42:22.296453   20319 config.go:182] Loaded profile config "functional-068222": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 10:42:22.297108   20319 config.go:182] Loaded profile config "functional-068222": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 10:42:22.297662   20319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 10:42:22.297779   20319 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 10:42:22.312435   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45821
I0925 10:42:22.312890   20319 main.go:141] libmachine: () Calling .GetVersion
I0925 10:42:22.313482   20319 main.go:141] libmachine: Using API Version  1
I0925 10:42:22.313510   20319 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 10:42:22.313949   20319 main.go:141] libmachine: () Calling .GetMachineName
I0925 10:42:22.314177   20319 main.go:141] libmachine: (functional-068222) Calling .GetState
I0925 10:42:22.316283   20319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 10:42:22.316326   20319 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 10:42:22.330645   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40759
I0925 10:42:22.331084   20319 main.go:141] libmachine: () Calling .GetVersion
I0925 10:42:22.331764   20319 main.go:141] libmachine: Using API Version  1
I0925 10:42:22.331784   20319 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 10:42:22.332151   20319 main.go:141] libmachine: () Calling .GetMachineName
I0925 10:42:22.332358   20319 main.go:141] libmachine: (functional-068222) Calling .DriverName
I0925 10:42:22.332589   20319 ssh_runner.go:195] Run: systemctl --version
I0925 10:42:22.332615   20319 main.go:141] libmachine: (functional-068222) Calling .GetSSHHostname
I0925 10:42:22.335706   20319 main.go:141] libmachine: (functional-068222) DBG | domain functional-068222 has defined MAC address 52:54:00:d1:b3:12 in network mk-functional-068222
I0925 10:42:22.336098   20319 main.go:141] libmachine: (functional-068222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b3:12", ip: ""} in network mk-functional-068222: {Iface:virbr1 ExpiryTime:2023-09-25 11:39:13 +0000 UTC Type:0 Mac:52:54:00:d1:b3:12 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:functional-068222 Clientid:01:52:54:00:d1:b3:12}
I0925 10:42:22.336142   20319 main.go:141] libmachine: (functional-068222) DBG | domain functional-068222 has defined IP address 192.168.39.161 and MAC address 52:54:00:d1:b3:12 in network mk-functional-068222
I0925 10:42:22.336195   20319 main.go:141] libmachine: (functional-068222) Calling .GetSSHPort
I0925 10:42:22.336376   20319 main.go:141] libmachine: (functional-068222) Calling .GetSSHKeyPath
I0925 10:42:22.336542   20319 main.go:141] libmachine: (functional-068222) Calling .GetSSHUsername
I0925 10:42:22.336698   20319 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/functional-068222/id_rsa Username:docker}
I0925 10:42:22.464428   20319 build_images.go:151] Building image from path: /tmp/build.3597295141.tar
I0925 10:42:22.464498   20319 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0925 10:42:22.485084   20319 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3597295141.tar
I0925 10:42:22.499391   20319 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3597295141.tar: stat -c "%s %y" /var/lib/minikube/build/build.3597295141.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3597295141.tar': No such file or directory
I0925 10:42:22.499426   20319 ssh_runner.go:362] scp /tmp/build.3597295141.tar --> /var/lib/minikube/build/build.3597295141.tar (3072 bytes)
I0925 10:42:22.527371   20319 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3597295141
I0925 10:42:22.540986   20319 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3597295141 -xf /var/lib/minikube/build/build.3597295141.tar
I0925 10:42:22.562218   20319 docker.go:340] Building image: /var/lib/minikube/build/build.3597295141
I0925 10:42:22.562283   20319 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-068222 /var/lib/minikube/build/build.3597295141
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0925 10:42:25.334546   20319 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-068222 /var/lib/minikube/build/build.3597295141: (2.772244736s)
I0925 10:42:25.334603   20319 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3597295141
I0925 10:42:25.344032   20319 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3597295141.tar
I0925 10:42:25.353133   20319 build_images.go:207] Built localhost/my-image:functional-068222 from /tmp/build.3597295141.tar
I0925 10:42:25.353162   20319 build_images.go:123] succeeded building to: functional-068222
I0925 10:42:25.353166   20319 build_images.go:124] failed building to: 
I0925 10:42:25.353186   20319 main.go:141] libmachine: Making call to close driver server
I0925 10:42:25.353195   20319 main.go:141] libmachine: (functional-068222) Calling .Close
I0925 10:42:25.353476   20319 main.go:141] libmachine: Successfully made call to close driver server
I0925 10:42:25.353495   20319 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 10:42:25.353499   20319 main.go:141] libmachine: (functional-068222) DBG | Closing plugin on server side
I0925 10:42:25.353513   20319 main.go:141] libmachine: Making call to close driver server
I0925 10:42:25.353524   20319 main.go:141] libmachine: (functional-068222) Calling .Close
I0925 10:42:25.353798   20319 main.go:141] libmachine: (functional-068222) DBG | Closing plugin on server side
I0925 10:42:25.353831   20319 main.go:141] libmachine: Successfully made call to close driver server
I0925 10:42:25.353845   20319 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.58s)
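Aside: the ImageBuild log above shows minikube's build-from-tar flow — probe for the tar on the VM with stat, transfer it only when the probe exits non-zero, unpack it, then run docker build. A minimal local Go sketch of the probe-then-copy step (illustrative only, not minikube's ssh_runner; the real flow copies over SSH into the VM, and the paths below are hypothetical):

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// copyIfMissing mirrors the log's existence probe: run `stat` on the
// destination and transfer the source only when the probe exits non-zero.
func copyIfMissing(src, dst string) error {
	if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err == nil {
		return nil // destination already present; skip the copy
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Hypothetical paths, for illustration only.
	if err := copyIfMissing("/tmp/build.tar", "/tmp/build-copy.tar"); err != nil {
		fmt.Fprintln(os.Stderr, "copy failed:", err)
		os.Exit(1)
	}
}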

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.62s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.597987999s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-068222
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.62s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.161:31872
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.92s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.92s)

TestFunctional/parallel/DockerEnv/bash (1.29s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-068222 docker-env) && out/minikube-linux-amd64 status -p functional-068222"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-068222 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.29s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image load --daemon gcr.io/google-containers/addon-resizer:functional-068222 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-068222 image load --daemon gcr.io/google-containers/addon-resizer:functional-068222 --alsologtostderr: (4.430862761s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.67s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image load --daemon gcr.io/google-containers/addon-resizer:functional-068222 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-068222 image load --daemon gcr.io/google-containers/addon-resizer:functional-068222 --alsologtostderr: (2.316748551s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.62s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.321838665s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-068222
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image load --daemon gcr.io/google-containers/addon-resizer:functional-068222 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-068222 image load --daemon gcr.io/google-containers/addon-resizer:functional-068222 --alsologtostderr: (3.784023686s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.34s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image save gcr.io/google-containers/addon-resizer:functional-068222 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
2023/09/25 10:42:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-068222 image save gcr.io/google-containers/addon-resizer:functional-068222 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.226591415s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.23s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image rm gcr.io/google-containers/addon-resizer:functional-068222 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-068222 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.624649499s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.83s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.03s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-068222
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-068222 image save --daemon gcr.io/google-containers/addon-resizer:functional-068222 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-068222 image save --daemon gcr.io/google-containers/addon-resizer:functional-068222 --alsologtostderr: (1.994760205s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-068222
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.03s)

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-068222
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-068222
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-068222
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (279.64s)
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-531432 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-531432 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m19.795900905s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-531432 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-531432 cache add gcr.io/k8s-minikube/gvisor-addon:2: (23.281290742s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-531432 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-531432 addons enable gvisor: (3.54024551s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [50df409b-827b-4173-90a2-9b45408603d5] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.056135254s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-531432 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [07c3ef71-23c6-4267-99f8-6f47c07b6cb7] Pending
helpers_test.go:344: "nginx-gvisor" [07c3ef71-23c6-4267-99f8-6f47c07b6cb7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [07c3ef71-23c6-4267-99f8-6f47c07b6cb7] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 12.059766344s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-531432
E0925 11:13:16.449001   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:13:16.454306   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:13:16.464565   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:13:16.484856   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:13:16.525266   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:13:16.605542   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:13:16.765878   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:13:17.086905   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-531432: (1m33.066124653s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-531432 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-531432 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (50.718231532s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [50df409b-827b-4173-90a2-9b45408603d5] Running
helpers_test.go:344: "gvisor" [50df409b-827b-4173-90a2-9b45408603d5] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [50df409b-827b-4173-90a2-9b45408603d5] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.032510218s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [07c3ef71-23c6-4267-99f8-6f47c07b6cb7] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.266865658s
helpers_test.go:175: Cleaning up "gvisor-531432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-531432
E0925 11:15:25.175814   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-531432: (1.501384741s)
--- PASS: TestGvisorAddon (279.64s)
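Aside: each "waiting 4m0s for pods matching ..." step above polls a label selector until the matching pods are healthy. A rough Go sketch of that wait pattern, shelling out to kubectl (assumptions: kubectl on PATH, the context name taken from this log, and a phase-only health check, which is simpler than the test helper's readiness check):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// waitForPods polls a label selector until every matching pod reports
// phase Running, or the deadline passes.
func waitForPods(ctx, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "-n", namespace,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			healthy := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					healthy = false
					break
				}
			}
			if healthy {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("pods matching %q not healthy within %s", selector, timeout)
}

func main() {
	if err := waitForPods("gvisor-531432", "kube-system",
		"kubernetes.io/minikube-addons=gvisor", 4*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}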

                                                
                                    
TestImageBuild/serial/Setup (53.63s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-869550 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-869550 --driver=kvm2 : (53.63315266s)
--- PASS: TestImageBuild/serial/Setup (53.63s)

TestImageBuild/serial/NormalBuild (1.58s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-869550
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-869550: (1.584089312s)
--- PASS: TestImageBuild/serial/NormalBuild (1.58s)

TestImageBuild/serial/BuildWithBuildArg (1.27s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-869550
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-869550: (1.266775386s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.27s)

TestImageBuild/serial/BuildWithDockerIgnore (0.37s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-869550
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.37s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-869550
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

TestIngressAddonLegacy/StartLegacyK8sCluster (78.06s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-303206 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E0925 10:44:03.259301   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-303206 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m18.063908086s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (78.06s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.44s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-303206 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-303206 addons enable ingress --alsologtostderr -v=5: (18.435236925s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.44s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-303206 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (33.18s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-303206 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-303206 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.408805348s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-303206 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-303206 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [19ab9eab-f3a6-41db-ae5a-bf2c40af43d1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [19ab9eab-f3a6-41db-ae5a-bf2c40af43d1] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.02414251s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-303206 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-303206 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-303206 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.78
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-303206 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-303206 addons disable ingress-dns --alsologtostderr -v=1: (2.151294374s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-303206 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-303206 addons disable ingress --alsologtostderr -v=1: (7.528472346s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (33.18s)

TestJSONOutput/start/Command (64.18s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-122046 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0925 10:46:19.413925   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:46:46.064823   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 10:46:46.070129   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 10:46:46.080435   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 10:46:46.100726   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 10:46:46.141011   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 10:46:46.221394   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 10:46:46.381852   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 10:46:46.702412   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 10:46:47.100024   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:46:47.343296   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 10:46:48.624303   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 10:46:51.186062   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 10:46:56.306863   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-122046 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m4.181521269s)
--- PASS: TestJSONOutput/start/Command (64.18s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.58s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-122046 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-122046 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.09s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-122046 --output=json --user=testUser
E0925 10:47:06.547445   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-122046 --output=json --user=testUser: (8.088549555s)
--- PASS: TestJSONOutput/stop/Command (8.09s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-278313 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-278313 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.892115ms)

-- stdout --
	{"specversion":"1.0","id":"a4f3e78f-ca2d-40d7-adc4-07f4246354bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-278313] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ea874fe-d756-42bb-8774-2df98c21e992","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17297"}}
	{"specversion":"1.0","id":"79d16d26-66b8-4550-bb20-08c125511f1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1d6fa27c-6872-46c3-8a33-aacaada41568","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig"}}
	{"specversion":"1.0","id":"322f5029-6bfc-494c-8f09-d4be539c7b39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube"}}
	{"specversion":"1.0","id":"8ef55e5d-1073-4887-9b5e-2c0cc355820a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"203858e5-ed75-4c7c-9244-f65e201054c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fdf8063e-e56d-4fa5-8dfb-9e6b2db093b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-278313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-278313
--- PASS: TestErrorJSONOutput (0.19s)
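Aside: the --output=json stream above is one CloudEvents-style JSON object per line, with types such as io.k8s.sigs.minikube.step and io.k8s.sigs.minikube.error. A minimal Go sketch of consuming such a stream (the struct mirrors only the fields visible in this log; it is not minikube's own type definition):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event covers just the fields shown in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Assumed usage: minikube start --output=json ... | this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip any non-JSON lines in the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}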

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (103.92s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-406674 --driver=kvm2 
E0925 10:47:27.028289   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-406674 --driver=kvm2 : (48.049671879s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-413810 --driver=kvm2 
E0925 10:48:07.989470   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-413810 --driver=kvm2 : (53.130168455s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-406674
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-413810
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-413810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-413810
helpers_test.go:175: Cleaning up "first-406674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-406674
--- PASS: TestMinikubeProfile (103.92s)

TestMountStart/serial/StartWithMountFirst (31.25s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-257825 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-257825 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.24495185s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.25s)

TestMountStart/serial/VerifyMountFirst (0.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-257825 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-257825 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (28.73s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-284651 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0925 10:49:29.910275   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-284651 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.729770441s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.73s)

TestMountStart/serial/VerifyMountSecond (0.37s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-284651 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-284651 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.87s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-257825 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-284651 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-284651 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (2.07s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-284651
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-284651: (2.073723523s)
--- PASS: TestMountStart/serial/Stop (2.07s)

TestMountStart/serial/RestartStopped (24.75s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-284651
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-284651: (23.75273218s)
E0925 10:50:25.175809   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:50:25.181070   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:50:25.191272   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:50:25.211599   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:50:25.251940   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:50:25.332250   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:50:25.492732   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:50:25.813332   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (24.75s)

TestMountStart/serial/VerifyMountPostStop (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-284651 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-284651 ssh -- mount | grep 9p
E0925 10:50:26.454328   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (121.01s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-521056 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0925 10:50:27.735045   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:50:30.296075   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:50:35.417152   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:50:45.657905   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:51:06.138610   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:51:19.413516   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:51:46.064360   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 10:51:47.099610   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:52:13.750500   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-521056 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m0.599866493s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (121.01s)
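
The same two-node bring-up, reduced to its essentials (sketch; "demo" is a placeholder profile name):

    $ minikube start -p demo --nodes=2 --memory=2200 --driver=kvm2
    $ minikube -p demo status --alsologtostderr    # both the control plane and the m02 worker should report Running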

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-521056 -- rollout status deployment/busybox: (4.414659538s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- exec busybox-5bc68d56bd-9tr5s -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- exec busybox-5bc68d56bd-kcqqk -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- exec busybox-5bc68d56bd-9tr5s -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- exec busybox-5bc68d56bd-kcqqk -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- exec busybox-5bc68d56bd-9tr5s -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- exec busybox-5bc68d56bd-kcqqk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.13s)
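
Equivalent manual steps (sketch; any two-replica busybox Deployment works in place of the test manifest, and <pod-name> is a placeholder):

    $ kubectl apply -f multinode-pod-dns-test.yaml
    $ kubectl rollout status deployment/busybox
    $ kubectl get pods -o jsonpath='{.items[*].metadata.name}'
    $ kubectl exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local    # in-cluster DNS works from each node's pod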

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- exec busybox-5bc68d56bd-9tr5s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- exec busybox-5bc68d56bd-9tr5s -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- exec busybox-5bc68d56bd-kcqqk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-521056 -- exec busybox-5bc68d56bd-kcqqk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
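
The host-reachability check, by hand (sketch; <pod-name> is a placeholder, and 192.168.39.1 is the gateway IP from this run):

    $ kubectl exec <pod-name> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"    # resolve the host IP
    $ kubectl exec <pod-name> -- sh -c "ping -c 1 192.168.39.1"                                           # then ping it from the pod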

                                                
                                    
TestMultiNode/serial/AddNode (45.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-521056 -v 3 --alsologtostderr
E0925 10:53:09.020768   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-521056 -v 3 --alsologtostderr: (45.166870059s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.77s)
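
Adding a third node manually (sketch; "demo" is a placeholder profile):

    $ minikube node add -p demo    # creates and joins demo-m03
    $ minikube -p demo status      # all three hosts should report Running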

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 cp testdata/cp-test.txt multinode-521056:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 cp multinode-521056:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3816392003/001/cp-test_multinode-521056.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 cp multinode-521056:/home/docker/cp-test.txt multinode-521056-m02:/home/docker/cp-test_multinode-521056_multinode-521056-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056-m02 "sudo cat /home/docker/cp-test_multinode-521056_multinode-521056-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 cp multinode-521056:/home/docker/cp-test.txt multinode-521056-m03:/home/docker/cp-test_multinode-521056_multinode-521056-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056-m03 "sudo cat /home/docker/cp-test_multinode-521056_multinode-521056-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 cp testdata/cp-test.txt multinode-521056-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 cp multinode-521056-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3816392003/001/cp-test_multinode-521056-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 cp multinode-521056-m02:/home/docker/cp-test.txt multinode-521056:/home/docker/cp-test_multinode-521056-m02_multinode-521056.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056 "sudo cat /home/docker/cp-test_multinode-521056-m02_multinode-521056.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 cp multinode-521056-m02:/home/docker/cp-test.txt multinode-521056-m03:/home/docker/cp-test_multinode-521056-m02_multinode-521056-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056-m03 "sudo cat /home/docker/cp-test_multinode-521056-m02_multinode-521056-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 cp testdata/cp-test.txt multinode-521056-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 cp multinode-521056-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3816392003/001/cp-test_multinode-521056-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 cp multinode-521056-m03:/home/docker/cp-test.txt multinode-521056:/home/docker/cp-test_multinode-521056-m03_multinode-521056.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056 "sudo cat /home/docker/cp-test_multinode-521056-m03_multinode-521056.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 cp multinode-521056-m03:/home/docker/cp-test.txt multinode-521056-m02:/home/docker/cp-test_multinode-521056-m03_multinode-521056-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 ssh -n multinode-521056-m02 "sudo cat /home/docker/cp-test_multinode-521056-m03_multinode-521056-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.32s)
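
The copy matrix above exercises every direction `minikube cp` supports; a minimal sketch (placeholder profile "demo"):

    $ minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt              # host -> node
    $ minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test.txt                  # node -> host
    $ minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt # node -> node
    $ minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"                # verify the file landed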

                                                
                                    
TestMultiNode/serial/StopNode (3.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-521056 node stop m03: (3.078382768s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-521056 status: exit status 7 (434.209906ms)

                                                
                                                
-- stdout --
	multinode-521056
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-521056-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-521056-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-521056 status --alsologtostderr: exit status 7 (427.58752ms)

                                                
                                                
-- stdout --
	multinode-521056
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-521056-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-521056-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 10:53:32.295915   27304 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:53:32.296267   27304 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:53:32.296280   27304 out.go:309] Setting ErrFile to fd 2...
	I0925 10:53:32.296287   27304 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:53:32.296474   27304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
	I0925 10:53:32.296678   27304 out.go:303] Setting JSON to false
	I0925 10:53:32.296721   27304 mustload.go:65] Loading cluster: multinode-521056
	I0925 10:53:32.296824   27304 notify.go:220] Checking for updates...
	I0925 10:53:32.297116   27304 config.go:182] Loaded profile config "multinode-521056": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 10:53:32.297132   27304 status.go:255] checking status of multinode-521056 ...
	I0925 10:53:32.297518   27304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:53:32.297597   27304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:53:32.312645   27304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34495
	I0925 10:53:32.313103   27304 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:53:32.313642   27304 main.go:141] libmachine: Using API Version  1
	I0925 10:53:32.313663   27304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:53:32.314053   27304 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:53:32.314259   27304 main.go:141] libmachine: (multinode-521056) Calling .GetState
	I0925 10:53:32.316232   27304 status.go:330] multinode-521056 host status = "Running" (err=<nil>)
	I0925 10:53:32.316261   27304 host.go:66] Checking if "multinode-521056" exists ...
	I0925 10:53:32.316600   27304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:53:32.316629   27304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:53:32.331690   27304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33251
	I0925 10:53:32.332101   27304 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:53:32.332997   27304 main.go:141] libmachine: Using API Version  1
	I0925 10:53:32.333032   27304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:53:32.333478   27304 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:53:32.333817   27304 main.go:141] libmachine: (multinode-521056) Calling .GetIP
	I0925 10:53:32.336616   27304 main.go:141] libmachine: (multinode-521056) DBG | domain multinode-521056 has defined MAC address 52:54:00:e5:d2:87 in network mk-multinode-521056
	I0925 10:53:32.337081   27304 main.go:141] libmachine: (multinode-521056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:d2:87", ip: ""} in network mk-multinode-521056: {Iface:virbr1 ExpiryTime:2023-09-25 11:50:43 +0000 UTC Type:0 Mac:52:54:00:e5:d2:87 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-521056 Clientid:01:52:54:00:e5:d2:87}
	I0925 10:53:32.337111   27304 main.go:141] libmachine: (multinode-521056) DBG | domain multinode-521056 has defined IP address 192.168.39.81 and MAC address 52:54:00:e5:d2:87 in network mk-multinode-521056
	I0925 10:53:32.337225   27304 host.go:66] Checking if "multinode-521056" exists ...
	I0925 10:53:32.337528   27304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:53:32.337555   27304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:53:32.352163   27304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45221
	I0925 10:53:32.352554   27304 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:53:32.353050   27304 main.go:141] libmachine: Using API Version  1
	I0925 10:53:32.353077   27304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:53:32.353410   27304 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:53:32.353618   27304 main.go:141] libmachine: (multinode-521056) Calling .DriverName
	I0925 10:53:32.353814   27304 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 10:53:32.353847   27304 main.go:141] libmachine: (multinode-521056) Calling .GetSSHHostname
	I0925 10:53:32.356789   27304 main.go:141] libmachine: (multinode-521056) DBG | domain multinode-521056 has defined MAC address 52:54:00:e5:d2:87 in network mk-multinode-521056
	I0925 10:53:32.357262   27304 main.go:141] libmachine: (multinode-521056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:d2:87", ip: ""} in network mk-multinode-521056: {Iface:virbr1 ExpiryTime:2023-09-25 11:50:43 +0000 UTC Type:0 Mac:52:54:00:e5:d2:87 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-521056 Clientid:01:52:54:00:e5:d2:87}
	I0925 10:53:32.357313   27304 main.go:141] libmachine: (multinode-521056) DBG | domain multinode-521056 has defined IP address 192.168.39.81 and MAC address 52:54:00:e5:d2:87 in network mk-multinode-521056
	I0925 10:53:32.357415   27304 main.go:141] libmachine: (multinode-521056) Calling .GetSSHPort
	I0925 10:53:32.357586   27304 main.go:141] libmachine: (multinode-521056) Calling .GetSSHKeyPath
	I0925 10:53:32.357772   27304 main.go:141] libmachine: (multinode-521056) Calling .GetSSHUsername
	I0925 10:53:32.357918   27304 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/multinode-521056/id_rsa Username:docker}
	I0925 10:53:32.440815   27304 ssh_runner.go:195] Run: systemctl --version
	I0925 10:53:32.446522   27304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 10:53:32.460360   27304 kubeconfig.go:92] found "multinode-521056" server: "https://192.168.39.81:8443"
	I0925 10:53:32.460387   27304 api_server.go:166] Checking apiserver status ...
	I0925 10:53:32.460428   27304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 10:53:32.473923   27304 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1847/cgroup
	I0925 10:53:32.484951   27304 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podae161e50e847ed8d6a0d3a640e4d6ee5/6f808c538eb105b643672b37ba7ca5636d3714784b906ddff11749116a427b39"
	I0925 10:53:32.485013   27304 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podae161e50e847ed8d6a0d3a640e4d6ee5/6f808c538eb105b643672b37ba7ca5636d3714784b906ddff11749116a427b39/freezer.state
	I0925 10:53:32.495796   27304 api_server.go:204] freezer state: "THAWED"
	I0925 10:53:32.495822   27304 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I0925 10:53:32.501238   27304 api_server.go:279] https://192.168.39.81:8443/healthz returned 200:
	ok
	I0925 10:53:32.501269   27304 status.go:421] multinode-521056 apiserver status = Running (err=<nil>)
	I0925 10:53:32.501278   27304 status.go:257] multinode-521056 status: &{Name:multinode-521056 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 10:53:32.501295   27304 status.go:255] checking status of multinode-521056-m02 ...
	I0925 10:53:32.501581   27304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:53:32.501605   27304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:53:32.517287   27304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41703
	I0925 10:53:32.517683   27304 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:53:32.518183   27304 main.go:141] libmachine: Using API Version  1
	I0925 10:53:32.518199   27304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:53:32.518568   27304 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:53:32.518794   27304 main.go:141] libmachine: (multinode-521056-m02) Calling .GetState
	I0925 10:53:32.520369   27304 status.go:330] multinode-521056-m02 host status = "Running" (err=<nil>)
	I0925 10:53:32.520384   27304 host.go:66] Checking if "multinode-521056-m02" exists ...
	I0925 10:53:32.520779   27304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:53:32.520815   27304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:53:32.535874   27304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42671
	I0925 10:53:32.536290   27304 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:53:32.536786   27304 main.go:141] libmachine: Using API Version  1
	I0925 10:53:32.536805   27304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:53:32.537103   27304 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:53:32.537285   27304 main.go:141] libmachine: (multinode-521056-m02) Calling .GetIP
	I0925 10:53:32.539735   27304 main.go:141] libmachine: (multinode-521056-m02) DBG | domain multinode-521056-m02 has defined MAC address 52:54:00:25:67:1d in network mk-multinode-521056
	I0925 10:53:32.540112   27304 main.go:141] libmachine: (multinode-521056-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:67:1d", ip: ""} in network mk-multinode-521056: {Iface:virbr1 ExpiryTime:2023-09-25 11:51:59 +0000 UTC Type:0 Mac:52:54:00:25:67:1d Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-521056-m02 Clientid:01:52:54:00:25:67:1d}
	I0925 10:53:32.540154   27304 main.go:141] libmachine: (multinode-521056-m02) DBG | domain multinode-521056-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:25:67:1d in network mk-multinode-521056
	I0925 10:53:32.540278   27304 host.go:66] Checking if "multinode-521056-m02" exists ...
	I0925 10:53:32.540598   27304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:53:32.540639   27304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:53:32.556133   27304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0925 10:53:32.556534   27304 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:53:32.557084   27304 main.go:141] libmachine: Using API Version  1
	I0925 10:53:32.557107   27304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:53:32.557428   27304 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:53:32.557602   27304 main.go:141] libmachine: (multinode-521056-m02) Calling .DriverName
	I0925 10:53:32.557815   27304 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 10:53:32.557837   27304 main.go:141] libmachine: (multinode-521056-m02) Calling .GetSSHHostname
	I0925 10:53:32.560411   27304 main.go:141] libmachine: (multinode-521056-m02) DBG | domain multinode-521056-m02 has defined MAC address 52:54:00:25:67:1d in network mk-multinode-521056
	I0925 10:53:32.560858   27304 main.go:141] libmachine: (multinode-521056-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:67:1d", ip: ""} in network mk-multinode-521056: {Iface:virbr1 ExpiryTime:2023-09-25 11:51:59 +0000 UTC Type:0 Mac:52:54:00:25:67:1d Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-521056-m02 Clientid:01:52:54:00:25:67:1d}
	I0925 10:53:32.560895   27304 main.go:141] libmachine: (multinode-521056-m02) DBG | domain multinode-521056-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:25:67:1d in network mk-multinode-521056
	I0925 10:53:32.561071   27304 main.go:141] libmachine: (multinode-521056-m02) Calling .GetSSHPort
	I0925 10:53:32.561248   27304 main.go:141] libmachine: (multinode-521056-m02) Calling .GetSSHKeyPath
	I0925 10:53:32.561425   27304 main.go:141] libmachine: (multinode-521056-m02) Calling .GetSSHUsername
	I0925 10:53:32.561570   27304 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/multinode-521056-m02/id_rsa Username:docker}
	I0925 10:53:32.652755   27304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 10:53:32.666244   27304 status.go:257] multinode-521056-m02 status: &{Name:multinode-521056-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0925 10:53:32.666283   27304 status.go:255] checking status of multinode-521056-m03 ...
	I0925 10:53:32.666608   27304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:53:32.666635   27304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:53:32.681267   27304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35099
	I0925 10:53:32.681709   27304 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:53:32.682190   27304 main.go:141] libmachine: Using API Version  1
	I0925 10:53:32.682212   27304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:53:32.682501   27304 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:53:32.682707   27304 main.go:141] libmachine: (multinode-521056-m03) Calling .GetState
	I0925 10:53:32.684431   27304 status.go:330] multinode-521056-m03 host status = "Stopped" (err=<nil>)
	I0925 10:53:32.684449   27304 status.go:343] host is not running, skipping remaining checks
	I0925 10:53:32.684456   27304 status.go:257] multinode-521056-m03 status: &{Name:multinode-521056-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.94s)
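
Stopping a single worker and reading the result (sketch; "demo" is a placeholder). As the run above shows, `status` exits 7 rather than 0 once any node is stopped, which the test treats as expected:

    $ minikube -p demo node stop m03
    $ minikube -p demo status    # m03 shows host: Stopped / kubelet: Stopped; exit code 7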

                                                
                                    
TestMultiNode/serial/StartAfterStop (32.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-521056 node start m03 --alsologtostderr: (31.693055463s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.31s)
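
And the inverse operation (sketch, same placeholder profile):

    $ minikube -p demo node start m03
    $ minikube -p demo status    # exit code returns to 0
    $ kubectl get nodes          # m03 rejoins the cluster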

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (174.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-521056
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-521056
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-521056: (29.175852246s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-521056 --wait=true -v=8 --alsologtostderr
E0925 10:55:25.175826   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:55:52.861271   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 10:56:19.413897   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 10:56:46.064303   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-521056 --wait=true -v=8 --alsologtostderr: (2m25.726080223s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-521056
--- PASS: TestMultiNode/serial/RestartKeepsNodes (174.98s)
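
The invariant being tested: a full stop/start cycle must not forget nodes (sketch; "demo" is a placeholder):

    $ minikube node list -p demo         # record the node set
    $ minikube stop -p demo
    $ minikube start -p demo --wait=true
    $ minikube node list -p demo         # the same nodes should be listed again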

                                                
                                    
TestMultiNode/serial/DeleteNode (1.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-521056 node delete m03: (1.196836155s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.72s)
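
Removing a node by hand (sketch; placeholder profile "demo"):

    $ minikube -p demo node delete m03
    $ kubectl get nodes    # demo-m03 is gone and the remaining nodes report Ready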

                                                
                                    
TestMultiNode/serial/StopMultiNode (25.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-521056 stop: (25.47114074s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-521056 status: exit status 7 (70.101885ms)

                                                
                                                
-- stdout --
	multinode-521056
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-521056-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-521056 status --alsologtostderr: exit status 7 (70.872209ms)

                                                
                                                
-- stdout --
	multinode-521056
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-521056-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 10:57:27.271166   28714 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:57:27.271273   28714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:57:27.271282   28714 out.go:309] Setting ErrFile to fd 2...
	I0925 10:57:27.271286   28714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:57:27.271465   28714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
	I0925 10:57:27.271648   28714 out.go:303] Setting JSON to false
	I0925 10:57:27.271675   28714 mustload.go:65] Loading cluster: multinode-521056
	I0925 10:57:27.271780   28714 notify.go:220] Checking for updates...
	I0925 10:57:27.272196   28714 config.go:182] Loaded profile config "multinode-521056": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 10:57:27.272216   28714 status.go:255] checking status of multinode-521056 ...
	I0925 10:57:27.273105   28714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:57:27.273187   28714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:57:27.287258   28714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38637
	I0925 10:57:27.287680   28714 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:57:27.288170   28714 main.go:141] libmachine: Using API Version  1
	I0925 10:57:27.288201   28714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:57:27.288528   28714 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:57:27.288748   28714 main.go:141] libmachine: (multinode-521056) Calling .GetState
	I0925 10:57:27.290233   28714 status.go:330] multinode-521056 host status = "Stopped" (err=<nil>)
	I0925 10:57:27.290248   28714 status.go:343] host is not running, skipping remaining checks
	I0925 10:57:27.290255   28714 status.go:257] multinode-521056 status: &{Name:multinode-521056 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 10:57:27.290281   28714 status.go:255] checking status of multinode-521056-m02 ...
	I0925 10:57:27.290531   28714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 10:57:27.290567   28714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 10:57:27.304172   28714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35565
	I0925 10:57:27.304483   28714 main.go:141] libmachine: () Calling .GetVersion
	I0925 10:57:27.304851   28714 main.go:141] libmachine: Using API Version  1
	I0925 10:57:27.304869   28714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 10:57:27.305134   28714 main.go:141] libmachine: () Calling .GetMachineName
	I0925 10:57:27.305276   28714 main.go:141] libmachine: (multinode-521056-m02) Calling .GetState
	I0925 10:57:27.306695   28714 status.go:330] multinode-521056-m02 host status = "Stopped" (err=<nil>)
	I0925 10:57:27.306705   28714 status.go:343] host is not running, skipping remaining checks
	I0925 10:57:27.306710   28714 status.go:257] multinode-521056-m02 status: &{Name:multinode-521056-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.61s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (134.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-521056 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0925 10:57:42.462507   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-521056 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (2m13.944169591s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-521056 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (134.47s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (54.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-521056
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-521056-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-521056-m02 --driver=kvm2 : exit status 14 (53.73618ms)

                                                
                                                
-- stdout --
	* [multinode-521056-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-521056-m02' is duplicated with machine name 'multinode-521056-m02' in profile 'multinode-521056'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-521056-m03 --driver=kvm2 
E0925 11:00:25.175690   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-521056-m03 --driver=kvm2 : (53.115677093s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-521056
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-521056: exit status 80 (222.782271ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-521056
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-521056-m03 already exists in multinode-521056-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-521056-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (54.42s)
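
The two failure modes above, distilled (sketch; "demo" is a placeholder for an existing multi-node profile): a new profile may not reuse an existing machine name, and `node add` refuses a node name that already exists as a standalone profile.

    $ minikube start -p demo-m02 --driver=kvm2    # exit 14: "Profile name should be unique"
    $ minikube start -p demo-m03 --driver=kvm2    # create a standalone profile named like the next node
    $ minikube node add -p demo                   # exit 80: node demo-m03 already exists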

                                                
                                    
TestPreload (207.57s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-443932 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0925 11:01:19.414446   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 11:01:46.064782   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-443932 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m6.573008674s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-443932 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-443932 image pull gcr.io/k8s-minikube/busybox: (1.30091673s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-443932
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-443932: (13.088496909s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-443932 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0925 11:03:09.111643   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-443932 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m5.582566272s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-443932 image list
helpers_test.go:175: Cleaning up "test-preload-443932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-443932
--- PASS: TestPreload (207.57s)
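
The preload scenario, step by step (sketch; "preload-demo" is a placeholder): start without a preloaded tarball, pull an extra image, then confirm the image survives a stop/start that does use the preload.

    $ minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 --driver=kvm2
    $ minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    $ minikube stop -p preload-demo
    $ minikube start -p preload-demo --driver=kvm2
    $ minikube -p preload-demo image list    # busybox should still be listed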

                                                
                                    
TestScheduledStopUnix (123.13s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-571915 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-571915 --memory=2048 --driver=kvm2 : (51.669301153s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-571915 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-571915 -n scheduled-stop-571915
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-571915 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-571915 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-571915 -n scheduled-stop-571915
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-571915
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-571915 --schedule 15s
E0925 11:05:25.175644   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-571915
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-571915: exit status 7 (53.639204ms)

                                                
                                                
-- stdout --
	scheduled-stop-571915
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-571915 -n scheduled-stop-571915
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-571915 -n scheduled-stop-571915: exit status 7 (50.641661ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-571915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-571915
--- PASS: TestScheduledStopUnix (123.13s)
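
The scheduled-stop lifecycle exercised above (sketch; "demo" is a placeholder):

    $ minikube stop -p demo --schedule 5m          # arm a stop 5 minutes out
    $ minikube stop -p demo --cancel-scheduled     # disarm it; the host keeps running
    $ minikube stop -p demo --schedule 15s         # re-arm with a short fuse
    $ sleep 20 && minikube status -p demo          # now reports Stopped (exit code 7)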

                                                
                                    
TestSkaffold (139.05s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2423682490 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-331094 --memory=2600 --driver=kvm2 
E0925 11:06:19.413953   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 11:06:46.064714   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 11:06:48.222177   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-331094 --memory=2600 --driver=kvm2 : (49.587161119s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2423682490 run --minikube-profile skaffold-331094 --kube-context skaffold-331094 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2423682490 run --minikube-profile skaffold-331094 --kube-context skaffold-331094 --status-check=true --port-forward=false --interactive=false: (1m17.561062547s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-59cfb6cdf6-mc9hn" [37f779b4-9011-4e01-a43a-42eaa6a1f8f4] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.018053938s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-b766c76cd-tthg8" [13b42b00-90ad-49b6-9204-852524441fd8] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.013319202s
helpers_test.go:175: Cleaning up "skaffold-331094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-331094
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-331094: (1.183370137s)
--- PASS: TestSkaffold (139.05s)
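
The skaffold invocation used here, for reference (sketch; "demo" is a placeholder profile, and skaffold must be on PATH):

    $ minikube start -p demo --memory=2600 --driver=kvm2
    $ skaffold run --minikube-profile demo --kube-context demo --status-check=true --port-forward=false --interactive=false
    $ kubectl get pods -l app=leeroy-app    # deployed workloads should reach Running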

                                                
                                    
TestRunningBinaryUpgrade (197.61s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.4187451505.exe start -p running-upgrade-992151 --memory=2200 --vm-driver=kvm2 
E0925 11:11:19.413469   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 11:11:46.065049   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.4187451505.exe start -p running-upgrade-992151 --memory=2200 --vm-driver=kvm2 : (2m5.694542465s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-992151 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0925 11:13:17.727514   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:13:19.007988   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:13:21.568247   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-992151 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m9.874170048s)
helpers_test.go:175: Cleaning up "running-upgrade-992151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-992151
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-992151: (1.730002145s)
--- PASS: TestRunningBinaryUpgrade (197.61s)
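
The running-binary upgrade pattern (sketch; the old-binary path and "upgrade-demo" profile are placeholders for whatever release is under test):

    $ /tmp/minikube-v1.6.2 start -p upgrade-demo --memory=2200 --vm-driver=kvm2    # old binary creates and runs the cluster
    $ minikube start -p upgrade-demo --memory=2200 --driver=kvm2                   # new binary adopts the still-running profile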

                                                
                                    
TestKubernetesUpgrade (205.29s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-470707 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-470707 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m16.70309261s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-470707
E0925 11:13:36.929223   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-470707: (13.113447409s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-470707 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-470707 status --format={{.Host}}: exit status 7 (71.522838ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-470707 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 
E0925 11:13:57.409819   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
E0925 11:14:22.463518   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-470707 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 : (49.195237948s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-470707 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-470707 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-470707 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (91.636731ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-470707] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-470707
	    minikube start -p kubernetes-upgrade-470707 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4707072 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-470707 --kubernetes-version=v1.28.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-470707 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 
E0925 11:14:38.370117   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-470707 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 : (1m4.880027867s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-470707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-470707
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-470707: (1.163828105s)
--- PASS: TestKubernetesUpgrade (205.29s)
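
The version-change contract the test enforces (sketch; "k8s-demo" is a placeholder): upgrading a stopped cluster succeeds, downgrading is refused.

    $ minikube start -p k8s-demo --kubernetes-version=v1.16.0 --driver=kvm2
    $ minikube stop -p k8s-demo
    $ minikube start -p k8s-demo --kubernetes-version=v1.28.2 --driver=kvm2    # upgrade: allowed
    $ minikube start -p k8s-demo --kubernetes-version=v1.16.0 --driver=kvm2    # downgrade: refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED)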

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.30s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (205.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.3500901707.exe start -p stopped-upgrade-654934 --memory=2200 --vm-driver=kvm2 
E0925 11:13:26.688394   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.3500901707.exe start -p stopped-upgrade-654934 --memory=2200 --vm-driver=kvm2 : (1m50.515892657s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.3500901707.exe -p stopped-upgrade-654934 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.3500901707.exe -p stopped-upgrade-654934 stop: (13.624684362s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-654934 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-654934 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m21.211081514s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (205.35s)
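
Note: the upgrade flow exercised here reduces to three commands: boot the cluster with an old release binary, stop it, then start it again with the binary under test. A hand-run sketch using the paths from this run (the /tmp binary is the test's downloaded v1.6.2 release):

    $ /tmp/minikube-v1.6.2.3500901707.exe start -p stopped-upgrade-654934 --memory=2200 --vm-driver=kvm2
    $ /tmp/minikube-v1.6.2.3500901707.exe -p stopped-upgrade-654934 stop
    $ out/minikube-linux-amd64 start -p stopped-upgrade-654934 --memory=2200 --alsologtostderr -v=1 --driver=kvm2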

                                                
                                    
TestPause/serial/Start (94s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-329196 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-329196 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m34.001364006s)
--- PASS: TestPause/serial/Start (94.00s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-882146 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-882146 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (59.155813ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-882146] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
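
Note: exit status 14 (MK_USAGE) is the expected outcome here; --no-kubernetes and --kubernetes-version are mutually exclusive. If a kubernetes-version value has been persisted in the global config, the documented fix is:

    $ minikube config unset kubernetes-version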

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (60.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-882146 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-882146 --driver=kvm2 : (1m0.165398707s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-882146 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (60.40s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (123.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0925 11:16:00.290656   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (2m3.961670508s)
--- PASS: TestNetworkPlugins/group/auto/Start (123.96s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (91.68s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-329196 --alsologtostderr -v=1 --driver=kvm2 
E0925 11:16:19.413484   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-329196 --alsologtostderr -v=1 --driver=kvm2 : (1m31.654692565s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (91.68s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (42.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-882146 --no-kubernetes --driver=kvm2 
E0925 11:16:46.064904   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-882146 --no-kubernetes --driver=kvm2 : (41.260702289s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-882146 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-882146 status -o json: exit status 2 (236.149814ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-882146","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-882146
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-882146: (1.075402381s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.57s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-654934
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-654934: (1.312005253s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (88.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m28.63188322s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.63s)

                                                
                                    
TestNoKubernetes/serial/Start (46.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-882146 --no-kubernetes --driver=kvm2 
E0925 11:17:33.376042   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:17:33.381535   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:17:33.391832   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:17:33.412124   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:17:33.452728   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:17:33.533198   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:17:33.694269   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:17:34.014750   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:17:34.655612   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-882146 --no-kubernetes --driver=kvm2 : (46.710723598s)
--- PASS: TestNoKubernetes/serial/Start (46.71s)

                                                
                                    
TestPause/serial/Pause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-329196 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

                                                
                                    
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-329196 --output=json --layout=cluster
E0925 11:17:35.936509   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-329196 --output=json --layout=cluster: exit status 2 (306.561687ms)

                                                
                                                
-- stdout --
	{"Name":"pause-329196","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-329196","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
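
Note: the status command exits 2 while the cluster is paused (the non-zero code is what this test expects), but the cluster-layout JSON still lands on stdout. Assuming jq is available on the host, the paused state can be extracted with:

    $ out/minikube-linux-amd64 status -p pause-329196 --output=json --layout=cluster | jq -r '.StatusName'
    Paused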

                                                
                                    
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-329196 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
TestPause/serial/PauseAgain (0.99s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-329196 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.99s)

                                                
                                    
TestPause/serial/DeletePaused (1.26s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-329196 --alsologtostderr -v=5
E0925 11:17:38.497574   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-329196 --alsologtostderr -v=5: (1.254903709s)
--- PASS: TestPause/serial/DeletePaused (1.26s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.7s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.70s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (91.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E0925 11:17:43.618475   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m31.28402848s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.28s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-299646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-299646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6tggl" [e0725696-e164-4e78-b04d-9c27b173915d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6tggl" [e0725696-e164-4e78-b04d-9c27b173915d] Running
E0925 11:17:53.858723   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.018547198s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.30s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-882146 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-882146 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.549265ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
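
Note: "ssh: Process exited with status 3" is the passing case here: systemctl is-active exits 0 only when a unit is active, and 3 conventionally means inactive, so the non-zero exit confirms no kubelet is running in the VM. A hand-run variant without --quiet prints the state as well:

    $ out/minikube-linux-amd64 ssh -p NoKubernetes-882146 "sudo systemctl is-active kubelet"
    inactive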

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.05s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-882146
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-882146: (2.195032319s)
--- PASS: TestNoKubernetes/serial/Stop (2.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (47.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-882146 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-882146 --driver=kvm2 : (47.899619958s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (47.90s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-299646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
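
Note: the DNS, Localhost, and HairPin checks above probe three distinct paths from inside the netcat pod: service-name resolution via nslookup, a loopback connection on the pod's own port 8080, and a hairpin connection back to the pod through its own "netcat" service. The same trio repeats for every plugin group below; the hairpin probe run by hand is simply:

    $ kubectl --context auto-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"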

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (133.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m13.216627901s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (133.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jkk6p" [fa7be992-caba-49ab-bab2-8909e6f805be] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.027430308s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
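
Note: the controller-pod wait above selects pods by label rather than by name. Assuming the kubeconfig context created for this profile, the equivalent manual check is:

    $ kubectl --context flannel-299646 -n kube-flannel get pods -l app=flannel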

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-299646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-299646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context flannel-299646 replace --force -f testdata/netcat-deployment.yaml: (1.023578502s)
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-74kbb" [664b91b8-3083-4c8b-b6ce-921dbfa0753b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-74kbb" [664b91b8-3083-4c8b-b6ce-921dbfa0753b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.011565608s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.57s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-299646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-882146 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-882146 "sudo systemctl is-active --quiet service kubelet": exit status 1 (244.083307ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (97.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E0925 11:18:55.300264   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m37.927291156s)
--- PASS: TestNetworkPlugins/group/bridge/Start (97.93s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (118.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m58.352031584s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (118.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dkvx7" [bdbd3d09-ddd4-4017-ba90-2f32b642894c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.021466811s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-299646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (15.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-299646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h8m9w" [6c8008f4-ca62-4ae2-aa87-02e4bcb45d0a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h8m9w" [6c8008f4-ca62-4ae2-aa87-02e4bcb45d0a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.010293436s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-299646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (95.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E0925 11:20:17.220924   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:20:25.175101   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m35.994314578s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (95.99s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-299646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-299646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g85wq" [50fedd27-6b49-4b20-8ff6-03483d2b4d87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g85wq" [50fedd27-6b49-4b20-8ff6-03483d2b4d87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.010769026s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-299646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-299646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kwrz7" [f1e9d356-990b-4497-ac17-93aa2e1a58a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kwrz7" [f1e9d356-990b-4497-ac17-93aa2e1a58a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.017592188s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-299646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-299646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-299646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-299646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tnmsj" [859c1282-dd55-4b64-b182-45dcdacb0ecb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tnmsj" [859c1282-dd55-4b64-b182-45dcdacb0ecb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.020953042s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.44s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (106.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m46.182087268s)
--- PASS: TestNetworkPlugins/group/calico/Start (106.18s)

                                                
                                    
TestNetworkPlugins/group/false/Start (104.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-299646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m44.366359712s)
--- PASS: TestNetworkPlugins/group/false/Start (104.37s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-299646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-299646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-299646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rgbvq" [3b2db1de-9a7f-4e94-a597-48ea579ccef7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rgbvq" [3b2db1de-9a7f-4e94-a597-48ea579ccef7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.010481598s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (168.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-694015 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-694015 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m48.898316275s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (168.90s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-299646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (155.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-863905 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2
E0925 11:22:33.376114   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-863905 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2: (2m35.058907003s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (155.06s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-brm9q" [3c46b0ca-43a1-46ee-ad51-aa1c6c3936df] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.024782201s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-299646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-299646 replace --force -f testdata/netcat-deployment.yaml
E0925 11:22:47.582577   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:22:47.587893   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:22:47.598183   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:22:47.618512   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:22:47.658876   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:22:47.739729   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vnk5s" [f2777a26-aae2-48f8-9368-833914143cbd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0925 11:22:47.900911   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:22:48.221675   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:22:48.862527   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:22:50.143714   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-vnk5s" [f2777a26-aae2-48f8-9368-833914143cbd] Running
E0925 11:22:57.825299   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.017919314s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-299646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-299646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c8zsg" [858ba220-ef5b-4dd0-97b9-25f2abb82772] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0925 11:22:52.704531   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-c8zsg" [858ba220-ef5b-4dd0-97b9-25f2abb82772] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.017152074s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.46s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-299646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-299646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)
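
Note: the Localhost and HairPin checks differ only in the dial target. Localhost has the pod connect to itself on 127.0.0.1, while HairPin dials the pod's own Service name (assuming, as the test manifest implies, a Service named netcat in front of the Deployment), which only succeeds if traffic can loop back through the service VIP to the originating pod:
	kubectl --context calico-299646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
	echo $?    # 0 means the hairpin connection succeeded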
E0925 11:30:25.175534   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 11:30:27.126654   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:30:29.107205   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:30:30.349913   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:30:31.634541   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
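
Note: the recurring cert_rotation.go:168 errors appear to come from client-go's certificate-reload watcher still tracking client certificates of profiles (auto-299646, bridge-299646, and so on) that earlier tests already tore down; they are background noise, not failures of the tests shown here. A hypothetical spot check that the files are simply gone:
	ls /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/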

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-319133 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-319133 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2: (1m19.270059661s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (100.42s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-372603 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2
E0925 11:23:23.963385   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/flannel-299646/client.crt: no such file or directory
E0925 11:23:25.244579   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/flannel-299646/client.crt: no such file or directory
E0925 11:23:27.805566   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/flannel-299646/client.crt: no such file or directory
E0925 11:23:28.223047   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 11:23:28.549308   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:23:32.926104   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/flannel-299646/client.crt: no such file or directory
E0925 11:23:43.167092   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/flannel-299646/client.crt: no such file or directory
E0925 11:24:03.647620   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/flannel-299646/client.crt: no such file or directory
E0925 11:24:09.510411   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:24:11.162469   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:24:11.167726   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:24:11.177988   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:24:11.198296   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:24:11.238562   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:24:11.318891   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:24:11.479291   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:24:11.799883   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:24:12.440218   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:24:13.720494   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-372603 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2: (1m40.423871774s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (100.42s)
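
Note: because newest-cni starts with --network-plugin=cni but never installs a CNI, ordinary pods cannot schedule (see the cni-mode warnings below); the test therefore narrows the readiness gate from the usual --wait=true to selected components. Sketch of the relevant flags from the run above:
	out/minikube-linux-amd64 start -p newest-cni-372603 \
	  --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=kvm2 --kubernetes-version=v1.28.2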

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-694015 create -f testdata/busybox.yaml
E0925 11:24:16.280973   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dbe695b2-ea42-40bc-8368-dae188c02e90] Pending
helpers_test.go:344: "busybox" [dbe695b2-ea42-40bc-8368-dae188c02e90] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dbe695b2-ea42-40bc-8368-dae188c02e90] Running
E0925 11:24:21.401915   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.040119357s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-694015 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)
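
Note: DeployApp applies testdata/busybox.yaml, waits for the integration-test=busybox pod to become healthy, then runs a trivial exec to prove end-to-end kubectl access against the cluster. The two steps by hand:
	kubectl --context old-k8s-version-694015 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-694015 exec busybox -- /bin/sh -c "ulimit -n"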

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-694015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-694015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.448183064s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-694015 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)
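
Note: the metrics-server addon is enabled with its registry overridden to the unresolvable fake.domain, so the image can never actually be pulled; what the test appears to verify is only that the Deployment object is created with the overridden image, which the follow-up describe confirms:
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-694015 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context old-k8s-version-694015 describe deploy/metrics-server -n kube-system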

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-694015 --alsologtostderr -v=3
E0925 11:24:31.642739   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-694015 --alsologtostderr -v=3: (13.2157154s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.44s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-863905 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [856eed12-a902-4438-a959-d19271f39ee5] Pending
helpers_test.go:344: "busybox" [856eed12-a902-4438-a959-d19271f39ee5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [856eed12-a902-4438-a959-d19271f39ee5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.031565237s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-863905 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-319133 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6a083b17-f122-426b-912e-3b9853f82c3f] Pending
helpers_test.go:344: "busybox" [6a083b17-f122-426b-912e-3b9853f82c3f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6a083b17-f122-426b-912e-3b9853f82c3f] Running
E0925 11:24:44.608210   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/flannel-299646/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.033771552s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-319133 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.3s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-863905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-863905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.184902114s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-863905 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694015 -n old-k8s-version-694015
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694015 -n old-k8s-version-694015: exit status 7 (67.647538ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-694015 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
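
Note: minikube status reports a stopped host with exit code 7, which the harness explicitly tolerates ("status error: exit status 7 (may be ok)"); the actual assertion is that addons enable dashboard still succeeds while the VM is down. Checking the exit code by hand:
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694015 -n old-k8s-version-694015
	echo $?    # 7 while the host is stopped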

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.23s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-863905 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-863905 --alsologtostderr -v=3: (13.226189234s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-319133 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-319133 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.149599347s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-319133 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-319133 --alsologtostderr -v=3
E0925 11:24:52.123789   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-319133 --alsologtostderr -v=3: (13.134625373s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-863905 -n no-preload-863905
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-863905 -n no-preload-863905: exit status 7 (71.912397ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-863905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (314.81s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-863905 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-863905 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2: (5m14.545848248s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-863905 -n no-preload-863905
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (314.81s)
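
Note: with --preload=false minikube skips the preloaded image tarball and pulls every Kubernetes image individually, which plausibly explains why this SecondStart takes over five minutes versus the 77-100 seconds of the preloaded profiles nearby. The flag as used above:
	out/minikube-linux-amd64 start -p no-preload-863905 --memory=2200 --preload=false \
	  --driver=kvm2 --kubernetes-version=v1.28.2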

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-319133 -n default-k8s-diff-port-319133
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-319133 -n default-k8s-diff-port-319133: exit status 7 (64.741598ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-319133 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (332.58s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-319133 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-319133 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2: (5m32.306825743s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-319133 -n default-k8s-diff-port-319133
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (332.58s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.41s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-372603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-372603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.409510595s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (13.14s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-372603 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-372603 --alsologtostderr -v=3: (13.137250009s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-372603 -n newest-cni-372603
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-372603 -n newest-cni-372603: exit status 7 (53.558094ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-372603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (77.83s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-372603 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2
E0925 11:25:25.175645   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/ingress-addon-legacy-303206/client.crt: no such file or directory
E0925 11:25:27.126563   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:27.132346   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:27.143328   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:27.163705   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:27.204804   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:27.285163   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:27.445643   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:27.766037   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:28.406572   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:29.687584   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:30.350465   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:30.355730   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:30.365918   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:30.386175   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:30.426520   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:30.506872   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:30.667309   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:30.988105   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:31.430633   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:25:31.629056   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:32.248033   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:32.909268   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:33.084525   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:25:35.469841   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:37.369073   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:40.590893   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:47.609895   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:25:50.831623   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:25:57.911435   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:25:57.916730   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:25:57.927013   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:25:57.947317   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:25:57.987658   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:25:58.067965   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:25:58.228619   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:25:58.549079   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:25:59.189861   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:26:00.470932   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:26:03.031201   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:26:06.528769   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/flannel-299646/client.crt: no such file or directory
E0925 11:26:08.090450   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:26:08.151704   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:26:11.312804   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:26:18.392222   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:26:19.414360   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/addons-686386/client.crt: no such file or directory
E0925 11:26:27.536218   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:26:27.541576   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:26:27.551848   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:26:27.572112   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:26:27.612537   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:26:27.693606   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:26:27.854003   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:26:28.180763   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:26:28.821377   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:26:30.102449   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:26:32.662697   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-372603 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2: (1m17.560099369s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-372603 -n newest-cni-372603
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (77.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-372603 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
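
Note: VerifyKubernetesImages lists the node's container images over SSH via the CRI CLI and reports anything outside the expected minikube set (here the gvisor addon image). The underlying command:
	out/minikube-linux-amd64 ssh -p newest-cni-372603 "sudo crictl images -o json"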

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.57s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-372603 --alsologtostderr -v=1
E0925 11:26:37.783814   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-372603 -n newest-cni-372603
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-372603 -n newest-cni-372603: exit status 2 (261.322111ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-372603 -n newest-cni-372603
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-372603 -n newest-cni-372603: exit status 2 (261.317915ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-372603 --alsologtostderr -v=1
E0925 11:26:38.872710   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-372603 -n newest-cni-372603
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-372603 -n newest-cni-372603
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.57s)
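
Note: pause freezes the control plane; afterwards status reports the API server as Paused and the kubelet as Stopped (each via exit status 2, which the harness tolerates), and unpause restores both. The sequence, condensed from the log:
	out/minikube-linux-amd64 pause -p newest-cni-372603 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-372603 -n newest-cni-372603    # Paused
	out/minikube-linux-amd64 unpause -p newest-cni-372603 --alsologtostderr -v=1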

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (73.02s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-094323 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2
E0925 11:26:46.064935   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/functional-068222/client.crt: no such file or directory
E0925 11:26:48.024507   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:26:49.051556   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:26:52.273909   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:26:55.004735   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:27:08.505042   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:27:19.833862   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:27:33.375288   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
E0925 11:27:45.263256   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:27:45.268563   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:27:45.278819   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:27:45.299068   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:27:45.339408   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:27:45.419780   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:27:45.580320   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:27:45.900940   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:27:46.541671   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:27:47.582226   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:27:47.790442   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:27:47.795696   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:27:47.805953   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:27:47.822115   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:27:47.826302   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:27:47.866752   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:27:47.947578   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:27:48.108593   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:27:48.429489   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:27:49.070253   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:27:49.465918   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:27:50.351040   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:27:50.383261   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:27:52.912027   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-094323 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2: (1m13.015645178s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (73.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.41s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-094323 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd1de7b3-aa2e-43af-aef5-0b362ef66b55] Pending
helpers_test.go:344: "busybox" [fd1de7b3-aa2e-43af-aef5-0b362ef66b55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0925 11:27:55.503897   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:27:58.032411   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
helpers_test.go:344: "busybox" [fd1de7b3-aa2e-43af-aef5-0b362ef66b55] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.032847862s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-094323 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-094323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-094323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.054029599s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-094323 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.12s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-094323 --alsologtostderr -v=3
E0925 11:28:05.744966   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:28:08.273008   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:28:10.972061   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/bridge-299646/client.crt: no such file or directory
E0925 11:28:14.194995   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/enable-default-cni-299646/client.crt: no such file or directory
E0925 11:28:15.271380   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/auto-299646/client.crt: no such file or directory
E0925 11:28:16.448322   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-094323 --alsologtostderr -v=3: (13.119410164s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-094323 -n embed-certs-094323
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-094323 -n embed-certs-094323: exit status 7 (59.755446ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-094323 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (332.53s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-094323 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2
E0925 11:28:22.682927   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/flannel-299646/client.crt: no such file or directory
E0925 11:28:26.225684   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:28:28.753574   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:28:41.754544   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kubenet-299646/client.crt: no such file or directory
E0925 11:28:50.369122   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/flannel-299646/client.crt: no such file or directory
E0925 11:29:07.186103   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/calico-299646/client.crt: no such file or directory
E0925 11:29:09.713871   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/false-299646/client.crt: no such file or directory
E0925 11:29:11.161844   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:29:11.386228   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/custom-flannel-299646/client.crt: no such file or directory
E0925 11:29:38.845182   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
E0925 11:29:39.492763   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/skaffold-331094/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-094323 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2: (5m32.150315991s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-094323 -n embed-certs-094323
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (332.53s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-229cv" [0d739fc0-233c-4a5d-a8ed-ced50037b9df] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021263146s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
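
The 9m0s wait above comes from a helper that polls for pods matching a label selector until one is up. A self-contained client-go sketch of that polling pattern follows; the kubeconfig path, poll interval, and the simplified Running-only check are assumptions — the real PodWait in helpers_test.go also reports readiness transitions, as the Pending/Ready lines later in this report show.

// podwait_sketch.go — a simplified stand-in for helpers_test.go's PodWait.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll until a pod labelled k8s-app=kubernetes-dashboard is Running.
	err = wait.PollImmediate(2*time.Second, 9*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
	fmt.Println("wait result:", err)
}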

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-229cv" [0d739fc0-233c-4a5d-a8ed-ced50037b9df] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012956988s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-863905 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
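
After the pod wait, the step above describes the dashboard-metrics-scraper deployment — plausibly to record or confirm the image override passed earlier via --images=MetricsScraper=registry.k8s.io/echoserver:1.4. The client-go sketch below is a hypothetical version of such a check, not the test's actual logic, which may only capture the describe output for the log.

// scraper_check_sketch.go — reads the dashboard-metrics-scraper deployment
// and prints its container images; asserting on them is an assumed intent.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deploy, err := client.AppsV1().Deployments("kubernetes-dashboard").Get(
		context.TODO(), "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range deploy.Spec.Template.Spec.Containers {
		// With the override above this would report registry.k8s.io/echoserver:1.4.
		fmt.Printf("container %s uses image %s\n", c.Name, c.Image)
	}
}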

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-863905 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)
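
The image audit above shells into the node, dumps the CRI image list as JSON, and flags anything outside the expected image set. A Go sketch of that parse follows; the "images"/"repoTags" field names follow crictl's CRI-style JSON output, and the allowlist prefixes are illustrative, not the test's actual list.

// image_audit_sketch.go — lists image tags from `crictl images -o json` run
// over minikube ssh; field names and the allowlist are assumptions.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "ssh",
		"-p", "no-preload-863905", "sudo crictl images -o json").Output()
	if err != nil {
		panic(err)
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		panic(err)
	}
	for _, img := range resp.Images {
		for _, tag := range img.RepoTags {
			// Flag anything outside the registries the test expects to see.
			if !strings.HasPrefix(tag, "registry.k8s.io/") &&
				!strings.HasPrefix(tag, "docker.io/kubernetesui/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}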

TestStartStop/group/no-preload/serial/Pause (2.77s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-863905 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-863905 -n no-preload-863905
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-863905 -n no-preload-863905: exit status 2 (264.391018ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-863905 -n no-preload-863905
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-863905 -n no-preload-863905: exit status 2 (268.007408ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-863905 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-863905 -n no-preload-863905
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-863905 -n no-preload-863905
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.77s)
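
The pause sequence above is symmetrical: pause, confirm the apiserver reports Paused and the kubelet Stopped (both via status exit 2, tolerated as "may be ok"), then unpause and re-check. A compact Go sketch of driving that cycle; binary path and profile come from the log, and treating exit 2 as benign during the paused phase reflects the report's observed behavior, not a documented guarantee.

// pause_cycle_sketch.go — pause, probe status fields, unpause; mirrors the
// log above. Exit-code tolerance is modeled on the report, not a spec.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
	if exitErr, ok := err.(*exec.ExitError); ok {
		return strings.TrimSpace(string(out)), exitErr.ExitCode()
	} else if err != nil {
		panic(err) // e.g. binary not found
	}
	return strings.TrimSpace(string(out)), 0
}

func main() {
	run("pause", "-p", "no-preload-863905", "--alsologtostderr", "-v=1")

	// While paused, {{.APIServer}} prints "Paused" and {{.Kubelet}} "Stopped",
	// each with exit status 2 — tolerated as "(may be ok)" by the test.
	for _, field := range []string{"{{.APIServer}}", "{{.Kubelet}}"} {
		out, code := run("status", "--format="+field,
			"-p", "no-preload-863905", "-n", "no-preload-863905")
		fmt.Printf("%s -> %s (exit %d)\n", field, out, code)
	}

	run("unpause", "-p", "no-preload-863905", "--alsologtostderr", "-v=1")
}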

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rljjm" [d21fe523-ec86-4434-8fad-fd789b37d91f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02015018s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rljjm" [d21fe523-ec86-4434-8fad-fd789b37d91f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017092922s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-319133 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-319133 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-319133 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-319133 -n default-k8s-diff-port-319133
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-319133 -n default-k8s-diff-port-319133: exit status 2 (248.921713ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-319133 -n default-k8s-diff-port-319133
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-319133 -n default-k8s-diff-port-319133: exit status 2 (258.691067ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-319133 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-319133 -n default-k8s-diff-port-319133
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-319133 -n default-k8s-diff-port-319133
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.65s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z8spk" [89978f23-6a34-4069-8c3f-2620615d9aae] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0925 11:33:56.422324   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/gvisor-531432/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z8spk" [89978f23-6a34-4069-8c3f-2620615d9aae] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.017506629s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z8spk" [89978f23-6a34-4069-8c3f-2620615d9aae] Running
E0925 11:34:11.161786   13213 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/kindnet-299646/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011631922s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-094323 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-094323 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (2.33s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-094323 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-094323 -n embed-certs-094323
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-094323 -n embed-certs-094323: exit status 2 (219.040312ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-094323 -n embed-certs-094323
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-094323 -n embed-certs-094323: exit status 2 (226.227004ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-094323 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-094323 -n embed-certs-094323
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-094323 -n embed-certs-094323
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.33s)

Test skip (31/315)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.91s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-299646 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-299646

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-299646

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-299646

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-299646

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-299646

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-299646

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-299646

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-299646

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-299646

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-299646

>>> host: /etc/nsswitch.conf:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: /etc/hosts:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: /etc/resolv.conf:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-299646

>>> host: crictl pods:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: crictl containers:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> k8s: describe netcat deployment:
error: context "cilium-299646" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-299646" does not exist

>>> k8s: netcat logs:
error: context "cilium-299646" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-299646" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-299646" does not exist

>>> k8s: coredns logs:
error: context "cilium-299646" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-299646" does not exist

>>> k8s: api server logs:
error: context "cilium-299646" does not exist

>>> host: /etc/cni:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: ip a s:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: ip r s:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: iptables-save:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: iptables table nat:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-299646

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-299646

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-299646" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-299646" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-299646

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-299646

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-299646" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-299646" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-299646" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-299646" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-299646" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: kubelet daemon config:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> k8s: kubelet logs:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-299646

>>> host: docker daemon status:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: docker daemon config:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: docker system info:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: cri-docker daemon status:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: cri-docker daemon config:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: cri-dockerd version:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: containerd daemon status:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: containerd daemon config:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: containerd config dump:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: crio daemon status:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: crio daemon config:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: /etc/crio:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"

>>> host: crio config:
* Profile "cilium-299646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299646"
----------------------- debugLogs end: cilium-299646 [took: 2.780501346s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-299646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-299646
--- SKIP: TestNetworkPlugins/group/cilium (2.91s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-785493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-785493
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)