Test Report: Docker_Linux_containerd 15310

af24d50c21096344c09c5fff0b9181d55a181bf0:2022-11-07:26449

Failed tests (5/277)

Order  Failed test  Duration (s)
205 TestPreload 360.35
213 TestKubernetesUpgrade 577.93
314 TestNetworkPlugins/group/calico/Start 516.71
331 TestNetworkPlugins/group/bridge/DNS 359.25
334 TestNetworkPlugins/group/enable-default-cni/DNS 351.32
TestPreload (360.35s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-170735 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1107 17:07:54.188048   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-170735 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (51.205208309s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-170735 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-170735 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6
E1107 17:09:17.236907   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
E1107 17:09:22.808336   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
E1107 17:12:04.641419   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 17:12:54.187718   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
E1107 17:13:27.687553   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
preload_test.go:67: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-170735 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6: exit status 81 (5m4.785054882s)

-- stdout --
	* [test-preload-170735] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the docker driver based on existing profile
	* Starting control plane node test-preload-170735 in cluster test-preload-170735
	* Pulling base image ...
	* Downloading Kubernetes v1.24.6 preload ...
	* Updating the running docker "test-preload-170735" container ...
	* Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
	* Configuring CNI (Container Networking Interface) ...
	X Problems detected in kubelet:
	  Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926231    4309 projected.go:192] Error preparing data for projected volume kube-api-access-l9w87 for pod kube-system/kindnet-fh9w9: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	  Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926837    4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87 podName:eca84e65-57b5-4cc9-b42a-0f991c91ffe7 nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.926808887 +0000 UTC m=+10.477882581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l9w87" (UniqueName: "kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87") pod "kindnet-fh9w9" (UID: "eca84e65-57b5-4cc9-b42a-0f991c91ffe7") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	  Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927004    4309 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	
	

-- /stdout --
** stderr ** 
	I1107 17:08:27.904911  165743 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:08:27.905045  165743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:08:27.905060  165743 out.go:309] Setting ErrFile to fd 2...
	I1107 17:08:27.905068  165743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:08:27.905197  165743 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
	I1107 17:08:27.905863  165743 out.go:303] Setting JSON to false
	I1107 17:08:27.907218  165743 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10261,"bootTime":1667830647,"procs":524,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 17:08:27.907299  165743 start.go:126] virtualization: kvm guest
	I1107 17:08:27.910260  165743 out.go:177] * [test-preload-170735] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 17:08:27.912717  165743 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:08:27.912644  165743 notify.go:220] Checking for updates...
	I1107 17:08:27.914611  165743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:08:27.916178  165743 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	I1107 17:08:27.917748  165743 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	I1107 17:08:27.919131  165743 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 17:08:27.921065  165743 config.go:180] Loaded profile config "test-preload-170735": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I1107 17:08:27.923047  165743 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I1107 17:08:27.924546  165743 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:08:27.952793  165743 docker.go:137] docker version: linux-20.10.21
	I1107 17:08:27.952897  165743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:08:28.051499  165743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-07 17:08:27.973134397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:08:28.051613  165743 docker.go:254] overlay module found
	I1107 17:08:28.054907  165743 out.go:177] * Using the docker driver based on existing profile
	I1107 17:08:28.056422  165743 start.go:282] selected driver: docker
	I1107 17:08:28.056442  165743 start.go:808] validating driver "docker" against &{Name:test-preload-170735 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:08:28.056553  165743 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:08:28.057351  165743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:08:28.151882  165743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-07 17:08:28.076276154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:08:28.152201  165743 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 17:08:28.152232  165743 cni.go:95] Creating CNI manager for ""
	I1107 17:08:28.152241  165743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1107 17:08:28.152260  165743 start_flags.go:317] config:
	{Name:test-preload-170735 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custo
mQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:08:28.155619  165743 out.go:177] * Starting control plane node test-preload-170735 in cluster test-preload-170735
	I1107 17:08:28.156954  165743 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1107 17:08:28.158499  165743 out.go:177] * Pulling base image ...
	I1107 17:08:28.159890  165743 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1107 17:08:28.159983  165743 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 17:08:28.181208  165743 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1107 17:08:28.181243  165743 cache.go:57] Caching tarball of preloaded images
	I1107 17:08:28.181535  165743 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1107 17:08:28.183696  165743 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
	I1107 17:08:28.182675  165743 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 17:08:28.183727  165743 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 17:08:28.185282  165743 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1107 17:08:28.211318  165743 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1107 17:08:32.100806  165743 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1107 17:08:32.100913  165743 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1107 17:08:33.024863  165743 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.6 on containerd
	I1107 17:08:33.025006  165743 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/config.json ...
	I1107 17:08:33.025200  165743 cache.go:208] Successfully downloaded all kic artifacts
	I1107 17:08:33.025245  165743 start.go:364] acquiring machines lock for test-preload-170735: {Name:mkeed53a7896dfd155258ca3d33f2ba7f27b6e3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 17:08:33.025355  165743 start.go:368] acquired machines lock for "test-preload-170735" in 83.257µs
	I1107 17:08:33.025378  165743 start.go:96] Skipping create...Using existing machine configuration
	I1107 17:08:33.025389  165743 fix.go:55] fixHost starting: 
	I1107 17:08:33.025604  165743 cli_runner.go:164] Run: docker container inspect test-preload-170735 --format={{.State.Status}}
	I1107 17:08:33.047785  165743 fix.go:103] recreateIfNeeded on test-preload-170735: state=Running err=<nil>
	W1107 17:08:33.047814  165743 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 17:08:33.051368  165743 out.go:177] * Updating the running docker "test-preload-170735" container ...
	I1107 17:08:33.053014  165743 machine.go:88] provisioning docker machine ...
	I1107 17:08:33.053055  165743 ubuntu.go:169] provisioning hostname "test-preload-170735"
	I1107 17:08:33.053104  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.073975  165743 main.go:134] libmachine: Using SSH client type: native
	I1107 17:08:33.074165  165743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49277 <nil> <nil>}
	I1107 17:08:33.074183  165743 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-170735 && echo "test-preload-170735" | sudo tee /etc/hostname
	I1107 17:08:33.197853  165743 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-170735
	
	I1107 17:08:33.197933  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.220254  165743 main.go:134] libmachine: Using SSH client type: native
	I1107 17:08:33.220408  165743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49277 <nil> <nil>}
	I1107 17:08:33.220428  165743 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-170735' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-170735/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-170735' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 17:08:33.333808  165743 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 17:08:33.333842  165743 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-44720/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-44720/.minikube}
	I1107 17:08:33.333861  165743 ubuntu.go:177] setting up certificates
	I1107 17:08:33.333869  165743 provision.go:83] configureAuth start
	I1107 17:08:33.333914  165743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-170735
	I1107 17:08:33.355318  165743 provision.go:138] copyHostCerts
	I1107 17:08:33.355367  165743 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem, removing ...
	I1107 17:08:33.355376  165743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem
	I1107 17:08:33.355441  165743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem (1082 bytes)
	I1107 17:08:33.355534  165743 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem, removing ...
	I1107 17:08:33.355545  165743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem
	I1107 17:08:33.355581  165743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem (1123 bytes)
	I1107 17:08:33.355641  165743 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem, removing ...
	I1107 17:08:33.355651  165743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem
	I1107 17:08:33.355689  165743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem (1679 bytes)
	I1107 17:08:33.355768  165743 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem org=jenkins.test-preload-170735 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-170735]
	I1107 17:08:33.436719  165743 provision.go:172] copyRemoteCerts
	I1107 17:08:33.436773  165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 17:08:33.436826  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.458416  165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
	I1107 17:08:33.541280  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 17:08:33.558205  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1107 17:08:33.574372  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 17:08:33.590572  165743 provision.go:86] duration metric: configureAuth took 256.685343ms
	I1107 17:08:33.590604  165743 ubuntu.go:193] setting minikube options for container-runtime
	I1107 17:08:33.590765  165743 config.go:180] Loaded profile config "test-preload-170735": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I1107 17:08:33.590782  165743 machine.go:91] provisioned docker machine in 537.75012ms
	I1107 17:08:33.590791  165743 start.go:300] post-start starting for "test-preload-170735" (driver="docker")
	I1107 17:08:33.590802  165743 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 17:08:33.590840  165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 17:08:33.590874  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.613972  165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
	I1107 17:08:33.697134  165743 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 17:08:33.699654  165743 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 17:08:33.699688  165743 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 17:08:33.699706  165743 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 17:08:33.699715  165743 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 17:08:33.699735  165743 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-44720/.minikube/addons for local assets ...
	I1107 17:08:33.699785  165743 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-44720/.minikube/files for local assets ...
	I1107 17:08:33.699859  165743 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem -> 511762.pem in /etc/ssl/certs
	I1107 17:08:33.699972  165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 17:08:33.706647  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem --> /etc/ssl/certs/511762.pem (1708 bytes)
	I1107 17:08:33.723587  165743 start.go:303] post-start completed in 132.77869ms
	I1107 17:08:33.723655  165743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 17:08:33.723701  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.745091  165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
	I1107 17:08:33.826766  165743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 17:08:33.830752  165743 fix.go:57] fixHost completed within 805.356487ms
	I1107 17:08:33.830779  165743 start.go:83] releasing machines lock for "test-preload-170735", held for 805.406949ms
	I1107 17:08:33.830865  165743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-170735
	I1107 17:08:33.851188  165743 ssh_runner.go:195] Run: systemctl --version
	I1107 17:08:33.851233  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.851246  165743 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1107 17:08:33.851299  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.874050  165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
	I1107 17:08:33.874539  165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
	I1107 17:08:33.970640  165743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1107 17:08:33.980208  165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 17:08:33.989283  165743 docker.go:189] disabling docker service ...
	I1107 17:08:33.989328  165743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 17:08:33.998251  165743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 17:08:34.006544  165743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 17:08:34.105872  165743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 17:08:34.199735  165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 17:08:34.208838  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 17:08:34.221138  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I1107 17:08:34.228758  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1107 17:08:34.237433  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1107 17:08:34.245113  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I1107 17:08:34.252514  165743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 17:08:34.258488  165743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 17:08:34.264983  165743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:08:34.355600  165743 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1107 17:08:34.426498  165743 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1107 17:08:34.426584  165743 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1107 17:08:34.431077  165743 start.go:472] Will wait 60s for crictl version
	I1107 17:08:34.431141  165743 ssh_runner.go:195] Run: sudo crictl version
	I1107 17:08:34.463332  165743 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-11-07T17:08:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1107 17:08:45.511931  165743 ssh_runner.go:195] Run: sudo crictl version
	I1107 17:08:45.534402  165743 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1107 17:08:45.534456  165743 ssh_runner.go:195] Run: containerd --version
	I1107 17:08:45.557129  165743 ssh_runner.go:195] Run: containerd --version
	I1107 17:08:45.581034  165743 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
	I1107 17:08:45.583252  165743 cli_runner.go:164] Run: docker network inspect test-preload-170735 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:08:45.604171  165743 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1107 17:08:45.607584  165743 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1107 17:08:45.607660  165743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 17:08:45.629696  165743 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
	I1107 17:08:45.629765  165743 ssh_runner.go:195] Run: which lz4
	I1107 17:08:45.632520  165743 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1107 17:08:45.635397  165743 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I1107 17:08:45.635419  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
	I1107 17:08:46.608662  165743 containerd.go:496] Took 0.976169 seconds to copy over tarball
	I1107 17:08:46.608757  165743 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 17:08:49.268239  165743 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.659458437s)
	I1107 17:08:49.268269  165743 containerd.go:503] Took 2.659548 seconds to extract the tarball
	I1107 17:08:49.268278  165743 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1107 17:08:49.290385  165743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:08:49.394503  165743 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1107 17:08:49.483535  165743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 17:08:49.508155  165743 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1107 17:08:49.508249  165743 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:08:49.508261  165743 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
	I1107 17:08:49.508303  165743 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I1107 17:08:49.508328  165743 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1107 17:08:49.508333  165743 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I1107 17:08:49.508363  165743 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1107 17:08:49.508413  165743 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1107 17:08:49.508304  165743 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1107 17:08:49.509646  165743 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1107 17:08:49.509674  165743 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1107 17:08:49.509722  165743 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1107 17:08:49.509649  165743 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:08:49.509638  165743 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1107 17:08:49.509650  165743 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I1107 17:08:49.509774  165743 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I1107 17:08:49.509643  165743 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
	I1107 17:08:49.721200  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I1107 17:08:49.721693  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I1107 17:08:49.738860  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I1107 17:08:49.739213  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
	I1107 17:08:49.747795  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1107 17:08:49.758483  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
	I1107 17:08:49.761130  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
	I1107 17:08:49.977049  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
	I1107 17:08:50.610195  165743 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1107 17:08:50.610249  165743 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I1107 17:08:50.610292  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.614352  165743 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1107 17:08:50.614406  165743 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I1107 17:08:50.614453  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.705332  165743 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1107 17:08:50.705390  165743 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1107 17:08:50.705338  165743 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
	I1107 17:08:50.705434  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.705452  165743 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
	I1107 17:08:50.705619  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.717541  165743 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1107 17:08:50.717591  165743 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:08:50.717638  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.719439  165743 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
	I1107 17:08:50.719499  165743 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1107 17:08:50.719544  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.719689  165743 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
	I1107 17:08:50.719723  165743 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1107 17:08:50.719758  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.814270  165743 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
	I1107 17:08:50.814353  165743 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1107 17:08:50.814361  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I1107 17:08:50.814382  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.814394  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I1107 17:08:50.814410  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
	I1107 17:08:50.814414  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:08:50.814427  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
	I1107 17:08:50.814384  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I1107 17:08:50.814449  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
	I1107 17:08:52.582624  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.768192619s)
	I1107 17:08:52.582662  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I1107 17:08:52.582681  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (1.768236997s)
	I1107 17:08:52.582691  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I1107 17:08:52.582637  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.768194557s)
	I1107 17:08:52.582747  165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1107 17:08:52.582772  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (1.768339669s)
	I1107 17:08:52.582798  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
	I1107 17:08:52.582748  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1107 17:08:52.582749  165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1107 17:08:52.582829  165743 ssh_runner.go:235] Completed: which crictl: (1.768411501s)
	I1107 17:08:52.582855  165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1107 17:08:52.582878  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
	I1107 17:08:52.585359  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6: (1.770910623s)
	I1107 17:08:52.585380  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
	I1107 17:08:52.585416  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (1.771036539s)
	I1107 17:08:52.585438  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I1107 17:08:52.585502  165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1107 17:08:52.585583  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6: (1.771118502s)
	I1107 17:08:52.585599  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
	I1107 17:08:52.587242  165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1107 17:08:52.587261  165743 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1107 17:08:52.587294  165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I1107 17:08:52.676919  165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1107 17:08:52.677014  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
	I1107 17:08:52.677049  165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1107 17:08:52.677110  165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1107 17:09:00.039059  165743 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (7.451733367s)
	I1107 17:09:00.039096  165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I1107 17:09:00.039139  165743 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1107 17:09:00.039203  165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I1107 17:09:01.824108  165743 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.784848281s)
	I1107 17:09:01.824150  165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I1107 17:09:01.824181  165743 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1107 17:09:01.824223  165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1107 17:09:02.321028  165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1107 17:09:02.321067  165743 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
	I1107 17:09:02.321122  165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I1107 17:09:02.521066  165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I1107 17:09:02.521129  165743 cache_images.go:92] LoadImages completed in 13.012944956s
	W1107 17:09:02.521265  165743 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6: no such file or directory
	I1107 17:09:02.521313  165743 ssh_runner.go:195] Run: sudo crictl info
	I1107 17:09:02.549803  165743 cni.go:95] Creating CNI manager for ""
	I1107 17:09:02.549843  165743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1107 17:09:02.549862  165743 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 17:09:02.549885  165743 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-170735 NodeName:test-preload-170735 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 17:09:02.550126  165743 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-170735"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 17:09:02.550287  165743 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-170735 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.6 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 17:09:02.550387  165743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
	I1107 17:09:02.558461  165743 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 17:09:02.558534  165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 17:09:02.609209  165743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
	I1107 17:09:02.622855  165743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 17:09:02.636362  165743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I1107 17:09:02.650109  165743 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1107 17:09:02.653949  165743 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735 for IP: 192.168.67.2
	I1107 17:09:02.654100  165743 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-44720/.minikube/ca.key
	I1107 17:09:02.654166  165743 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.key
	I1107 17:09:02.654255  165743 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/client.key
	I1107 17:09:02.654354  165743 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/apiserver.key.c7fa3a9e
	I1107 17:09:02.654418  165743 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/proxy-client.key
	I1107 17:09:02.654554  165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176.pem (1338 bytes)
	W1107 17:09:02.654595  165743 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176_empty.pem, impossibly tiny 0 bytes
	I1107 17:09:02.654613  165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 17:09:02.654657  165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem (1082 bytes)
	I1107 17:09:02.654702  165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem (1123 bytes)
	I1107 17:09:02.654738  165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem (1679 bytes)
	I1107 17:09:02.654791  165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem (1708 bytes)
	I1107 17:09:02.655574  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 17:09:02.703678  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 17:09:02.723409  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 17:09:02.742737  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 17:09:02.763001  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 17:09:02.818366  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 17:09:02.839767  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 17:09:02.861717  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 17:09:02.910886  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176.pem --> /usr/share/ca-certificates/51176.pem (1338 bytes)
	I1107 17:09:02.931102  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem --> /usr/share/ca-certificates/511762.pem (1708 bytes)
	I1107 17:09:02.951804  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 17:09:03.011717  165743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 17:09:03.027317  165743 ssh_runner.go:195] Run: openssl version
	I1107 17:09:03.032867  165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 17:09:03.041130  165743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:09:03.044672  165743 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:09:03.044721  165743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:09:03.050588  165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 17:09:03.105632  165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51176.pem && ln -fs /usr/share/ca-certificates/51176.pem /etc/ssl/certs/51176.pem"
	I1107 17:09:03.114215  165743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51176.pem
	I1107 17:09:03.117586  165743 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/51176.pem
	I1107 17:09:03.117644  165743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51176.pem
	I1107 17:09:03.123353  165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/51176.pem /etc/ssl/certs/51391683.0"
	I1107 17:09:03.131017  165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/511762.pem && ln -fs /usr/share/ca-certificates/511762.pem /etc/ssl/certs/511762.pem"
	I1107 17:09:03.139872  165743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/511762.pem
	I1107 17:09:03.143694  165743 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/511762.pem
	I1107 17:09:03.143738  165743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/511762.pem
	I1107 17:09:03.149761  165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/511762.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 17:09:03.209904  165743 kubeadm.go:396] StartCluster: {Name:test-preload-170735 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] D
NSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:09:03.210035  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1107 17:09:03.210092  165743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 17:09:03.240135  165743 cri.go:87] found id: "bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206"
	I1107 17:09:03.240172  165743 cri.go:87] found id: "0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834"
	I1107 17:09:03.240181  165743 cri.go:87] found id: ""
	I1107 17:09:03.240225  165743 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1107 17:09:03.327373  165743 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5","pid":1641,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5/rootfs","created":"2022-11-07T17:07:57.155832841Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","io.kubernetes.cri.sandbox-name":"etcd-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa","pid":3510,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa/rootfs","created":"2022-11-07T17:08:53.110308717Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-lv445_fcbfbd08-498e-4a9c-8d36-0d45cbd312bd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-lv445","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834","pid":3658,"status":"running","bundle":"/run/containerd/
io.containerd.runtime.v2.task/k8s.io/0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834/rootfs","created":"2022-11-07T17:08:54.456156833Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","pid":2180,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/250fd
604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4/rootfs","created":"2022-11-07T17:08:16.602156421Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-fh9w9_eca84e65-57b5-4cc9-b42a-0f991c91ffe7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-fh9w9","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d","pid":3521,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d4d536c9a0
a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d/rootfs","created":"2022-11-07T17:08:53.110915142Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-170735_11f8c11ccd07f3d1eb49f811a3342256","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","rootfs":"/run/containerd/io.containerd.runtime.v2.ta
sk/k8s.io/37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95/rootfs","created":"2022-11-07T17:07:56.942370634Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-170735_11f8c11ccd07f3d1eb49f811a3342256","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049","pid":3522,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049","rootfs":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049/rootfs","created":"2022-11-07T17:08:53.027578577Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-170735_809d9df5626cf37e910052830f1a68d3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623","pid":2181,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba6
8a623","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623/rootfs","created":"2022-11-07T17:08:16.461925695Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-lv445_fcbfbd08-498e-4a9c-8d36-0d45cbd312bd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-lv445","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067","pid":2431,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067","rootf
s":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067/rootfs","created":"2022-11-07T17:08:19.802116354Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","io.kubernetes.cri.sandbox-name":"kindnet-fh9w9","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114","pid":2624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114/rootfs","created":"2022-11-07T17:08:24.414118976Z","annotati
ons":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e","pid":3576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e/rootfs","created":"2022-11-07T17:08:53.22282877Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-
shares":"102","io.kubernetes.cri.sandbox-id":"5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-46n4z_0bb47afc-9c44-48b3-8dd4-966ed2608a7a","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-46n4z","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83","pid":3544,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83/rootfs","created":"2022-11-07T17:08:53.114873995Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri
.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-170735_62ea0ae7f0dd287c41e3fc4d83f43bcc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","pid":1509,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86/rootfs","created":"2022-11-07T17:07:56.942483078Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cr
i.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-170735_62ea0ae7f0dd287c41e3fc4d83f43bcc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","pid":1511,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da/rootfs","created":"2022-11-07T17:07:56.942394808Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.c
ri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-170735_809d9df5626cf37e910052830f1a68d3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","pid":2564,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90/rootfs","created":"2022-11-07T17:08:24.30208689Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.s
andbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c43d0d64-f743-4627-894e-be6b8af2e64d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8","pid":2247,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8/rootfs","created":"2022-11-07T17:08:16.619320417Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-
name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623","io.kubernetes.cri.sandbox-name":"kube-proxy-lv445","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a","pid":1639,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a/rootfs","created":"2022-11-07T17:07:57.155960118Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","io.kubernetes.cri.sandbox-name":"kube-apiserver-tes
t-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","pid":2565,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593/rootfs","created":"2022-11-07T17:08:24.301147925Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-46n4z_0bb47afc-9c44-48b3-8dd4-966ed2608a7a","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-na
me":"coredns-6d4b75cb6d-46n4z","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","pid":1510,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74/rootfs","created":"2022-11-07T17:07:56.942447268Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-170735_d3532015a9097ea10a4280936fe474ca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri
.sandbox-name":"kube-scheduler-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247","pid":2623,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247/rootfs","created":"2022-11-07T17:08:24.411783378Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-46n4z","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d596e727cf71ed6c642b598c327f52552f
ba8f973625380adcf054e3f5d2d1c6","pid":1642,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6/rootfs","created":"2022-11-07T17:07:57.156067666Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a","pid":3553,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddefa3ac5399737dff
7110fcaf80962425381646c55d72cc70f71a263df0342a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a/rootfs","created":"2022-11-07T17:08:53.113518089Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-170735_d3532015a9097ea10a4280936fe474ca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6","pid":1640,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.
io/ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6/rootfs","created":"2022-11-07T17:07:57.156161632Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","pid":3562,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f82c54e5c1fb4c8247a99e96a8cf288d1c50b2
7e3b90db040e3d9988132681f6/rootfs","created":"2022-11-07T17:08:53.114973557Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c43d0d64-f743-4627-894e-be6b8af2e64d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed","pid":3518,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fb
de5a10025abb05664ed/rootfs","created":"2022-11-07T17:08:53.111272121Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-fh9w9_eca84e65-57b5-4cc9-b42a-0f991c91ffe7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-fh9w9","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
	I1107 17:09:03.327859  165743 cri.go:124] list returned 25 containers
	I1107 17:09:03.327880  165743 cri.go:127] container: {ID:0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5 Status:running}
	I1107 17:09:03.327898  165743 cri.go:129] skipping 0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5 - not in ps
	I1107 17:09:03.327906  165743 cri.go:127] container: {ID:0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa Status:running}
	I1107 17:09:03.327915  165743 cri.go:129] skipping 0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa - not in ps
	I1107 17:09:03.327927  165743 cri.go:127] container: {ID:0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834 Status:running}
	I1107 17:09:03.327939  165743 cri.go:133] skipping {0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834 running}: state = "running", want "paused"
	I1107 17:09:03.327954  165743 cri.go:127] container: {ID:250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4 Status:running}
	I1107 17:09:03.327966  165743 cri.go:129] skipping 250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4 - not in ps
	I1107 17:09:03.327973  165743 cri.go:127] container: {ID:2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d Status:running}
	I1107 17:09:03.327986  165743 cri.go:129] skipping 2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d - not in ps
	I1107 17:09:03.328004  165743 cri.go:127] container: {ID:37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95 Status:running}
	I1107 17:09:03.328018  165743 cri.go:129] skipping 37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95 - not in ps
	I1107 17:09:03.328029  165743 cri.go:127] container: {ID:3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049 Status:running}
	I1107 17:09:03.328041  165743 cri.go:129] skipping 3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049 - not in ps
	I1107 17:09:03.328047  165743 cri.go:127] container: {ID:415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623 Status:running}
	I1107 17:09:03.328060  165743 cri.go:129] skipping 415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623 - not in ps
	I1107 17:09:03.328071  165743 cri.go:127] container: {ID:46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067 Status:running}
	I1107 17:09:03.328082  165743 cri.go:129] skipping 46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067 - not in ps
	I1107 17:09:03.328092  165743 cri.go:127] container: {ID:5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114 Status:running}
	I1107 17:09:03.328100  165743 cri.go:129] skipping 5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114 - not in ps
	I1107 17:09:03.328107  165743 cri.go:127] container: {ID:5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e Status:running}
	I1107 17:09:03.328121  165743 cri.go:129] skipping 5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e - not in ps
	I1107 17:09:03.328132  165743 cri.go:127] container: {ID:5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83 Status:running}
	I1107 17:09:03.328144  165743 cri.go:129] skipping 5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83 - not in ps
	I1107 17:09:03.328150  165743 cri.go:127] container: {ID:705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86 Status:running}
	I1107 17:09:03.328169  165743 cri.go:129] skipping 705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86 - not in ps
	I1107 17:09:03.328181  165743 cri.go:127] container: {ID:76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da Status:running}
	I1107 17:09:03.328188  165743 cri.go:129] skipping 76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da - not in ps
	I1107 17:09:03.328199  165743 cri.go:127] container: {ID:7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90 Status:running}
	I1107 17:09:03.328209  165743 cri.go:129] skipping 7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90 - not in ps
	I1107 17:09:03.328214  165743 cri.go:127] container: {ID:7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8 Status:running}
	I1107 17:09:03.328223  165743 cri.go:129] skipping 7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8 - not in ps
	I1107 17:09:03.328229  165743 cri.go:127] container: {ID:9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a Status:running}
	I1107 17:09:03.328241  165743 cri.go:129] skipping 9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a - not in ps
	I1107 17:09:03.328248  165743 cri.go:127] container: {ID:a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593 Status:running}
	I1107 17:09:03.328263  165743 cri.go:129] skipping a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593 - not in ps
	I1107 17:09:03.328275  165743 cri.go:127] container: {ID:a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74 Status:running}
	I1107 17:09:03.328287  165743 cri.go:129] skipping a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74 - not in ps
	I1107 17:09:03.328297  165743 cri.go:127] container: {ID:b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247 Status:running}
	I1107 17:09:03.328308  165743 cri.go:129] skipping b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247 - not in ps
	I1107 17:09:03.328318  165743 cri.go:127] container: {ID:d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6 Status:running}
	I1107 17:09:03.328326  165743 cri.go:129] skipping d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6 - not in ps
	I1107 17:09:03.328337  165743 cri.go:127] container: {ID:ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a Status:running}
	I1107 17:09:03.328349  165743 cri.go:129] skipping ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a - not in ps
	I1107 17:09:03.328358  165743 cri.go:127] container: {ID:ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6 Status:running}
	I1107 17:09:03.328370  165743 cri.go:129] skipping ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6 - not in ps
	I1107 17:09:03.328381  165743 cri.go:127] container: {ID:f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6 Status:running}
	I1107 17:09:03.328391  165743 cri.go:129] skipping f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6 - not in ps
	I1107 17:09:03.328404  165743 cri.go:127] container: {ID:f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed Status:running}
	I1107 17:09:03.328415  165743 cri.go:129] skipping f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed - not in ps
	I1107 17:09:03.328459  165743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 17:09:03.336550  165743 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1107 17:09:03.336573  165743 kubeadm.go:627] restartCluster start
	I1107 17:09:03.336628  165743 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 17:09:03.344380  165743 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:09:03.345034  165743 kubeconfig.go:92] found "test-preload-170735" server: "https://192.168.67.2:8443"
	I1107 17:09:03.345729  165743 kapi.go:59] client config for test-preload-170735: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/client.key", CAFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 17:09:03.346403  165743 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 17:09:03.402000  165743 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-11-07 17:07:52.875254223 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-11-07 17:09:02.646277681 +0000
	@@ -38,7 +38,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.24.4
	+kubernetesVersion: v1.24.6
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1107 17:09:03.402024  165743 kubeadm.go:1114] stopping kube-system containers ...
	I1107 17:09:03.402039  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1107 17:09:03.402098  165743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 17:09:03.431844  165743 cri.go:87] found id: "bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206"
	I1107 17:09:03.431899  165743 cri.go:87] found id: "0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834"
	I1107 17:09:03.431910  165743 cri.go:87] found id: ""
	I1107 17:09:03.431917  165743 cri.go:232] Stopping containers: [bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206 0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834]
	I1107 17:09:03.431974  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:09:03.436330  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206 0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834
	I1107 17:09:03.742156  165743 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 17:09:03.809643  165743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:09:03.817012  165743 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov  7 17:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Nov  7 17:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Nov  7 17:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov  7 17:07 /etc/kubernetes/scheduler.conf
	
	I1107 17:09:03.817084  165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1107 17:09:03.823720  165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1107 17:09:03.830244  165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1107 17:09:03.836663  165743 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:09:03.836710  165743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1107 17:09:03.842795  165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1107 17:09:03.849520  165743 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:09:03.849574  165743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1107 17:09:03.856003  165743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 17:09:03.862911  165743 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 17:09:03.862935  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:09:04.002289  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:09:05.237323  165743 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.234999973s)
	I1107 17:09:05.237359  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:09:05.449035  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:09:05.504177  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:09:05.621639  165743 api_server.go:51] waiting for apiserver process to appear ...
	I1107 17:09:05.621702  165743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:09:05.633566  165743 api_server.go:71] duration metric: took 11.935157ms to wait for apiserver process to appear ...
	I1107 17:09:05.633600  165743 api_server.go:87] waiting for apiserver healthz status ...
	I1107 17:09:05.633614  165743 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1107 17:09:05.639393  165743 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1107 17:09:05.645496  165743 api_server.go:140] control plane version: v1.24.4
	W1107 17:09:05.645524  165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1107 17:09:06.147196  165743 api_server.go:140] control plane version: v1.24.4
	W1107 17:09:06.147277  165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1107 17:09:06.646924  165743 api_server.go:140] control plane version: v1.24.4
	W1107 17:09:06.646957  165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1107 17:09:07.147645  165743 api_server.go:140] control plane version: v1.24.4
	W1107 17:09:07.147679  165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1107 17:09:07.647341  165743 api_server.go:140] control plane version: v1.24.4
	W1107 17:09:07.647372  165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	W1107 17:09:08.146168  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:09:08.646046  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:09:09.147144  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:09:09.646092  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:09:10.147021  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:09:10.646973  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:09:11.146883  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	I1107 17:09:13.915841  165743 api_server.go:140] control plane version: v1.24.6
	I1107 17:09:13.915921  165743 api_server.go:130] duration metric: took 8.282312967s to wait for apiserver health ...
	I1107 17:09:13.915945  165743 cni.go:95] Creating CNI manager for ""
	I1107 17:09:13.915963  165743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1107 17:09:13.918212  165743 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 17:09:13.919726  165743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 17:09:13.924616  165743 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
	I1107 17:09:13.924640  165743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1107 17:09:14.021282  165743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 17:09:15.124609  165743 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.103271829s)
	I1107 17:09:15.124658  165743 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 17:09:15.134287  165743 system_pods.go:59] 8 kube-system pods found
	I1107 17:09:15.134343  165743 system_pods.go:61] "coredns-6d4b75cb6d-46n4z" [0bb47afc-9c44-48b3-8dd4-966ed2608a7a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 17:09:15.134355  165743 system_pods.go:61] "etcd-test-preload-170735" [bf983595-48b0-4ad3-948e-264fe4654767] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 17:09:15.134365  165743 system_pods.go:61] "kindnet-fh9w9" [eca84e65-57b5-4cc9-b42a-0f991c91ffe7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1107 17:09:15.134375  165743 system_pods.go:61] "kube-apiserver-test-preload-170735" [6005f40b-0034-46af-ac9b-8b7945ea8996] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 17:09:15.134382  165743 system_pods.go:61] "kube-controller-manager-test-preload-170735" [05e955ad-7fc3-4874-97a5-7ba8ee0faf37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 17:09:15.134396  165743 system_pods.go:61] "kube-proxy-lv445" [fcbfbd08-498e-4a9c-8d36-0d45cbd312bd] Running
	I1107 17:09:15.134404  165743 system_pods.go:61] "kube-scheduler-test-preload-170735" [102796b5-9e64-4c55-9ceb-c091fb0faf8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1107 17:09:15.134416  165743 system_pods.go:61] "storage-provisioner" [c43d0d64-f743-4627-894e-be6b8af2e64d] Running
	I1107 17:09:15.134425  165743 system_pods.go:74] duration metric: took 9.760603ms to wait for pod list to return data ...
	I1107 17:09:15.134434  165743 node_conditions.go:102] verifying NodePressure condition ...
	I1107 17:09:15.136728  165743 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 17:09:15.136759  165743 node_conditions.go:123] node cpu capacity is 8
	I1107 17:09:15.136770  165743 node_conditions.go:105] duration metric: took 2.331494ms to run NodePressure ...
	I1107 17:09:15.136786  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:09:15.388874  165743 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1107 17:09:15.392441  165743 kubeadm.go:778] kubelet initialised
	I1107 17:09:15.392464  165743 kubeadm.go:779] duration metric: took 3.557352ms waiting for restarted kubelet to initialise ...
	I1107 17:09:15.392473  165743 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:09:15.396706  165743 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace to be "Ready" ...
	I1107 17:09:17.406088  165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:19.407719  165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:21.906077  165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:23.906170  165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:25.906482  165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:28.406244  165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:29.906673  165743 pod_ready.go:92] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"True"
	I1107 17:09:29.906708  165743 pod_ready.go:81] duration metric: took 14.509975616s waiting for pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace to be "Ready" ...
	I1107 17:09:29.906722  165743 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-170735" in "kube-system" namespace to be "Ready" ...
	I1107 17:09:31.916347  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:33.916395  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:35.917695  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:38.416611  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:40.417341  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:42.917030  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:44.917463  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:47.417821  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:49.916882  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:52.417257  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:54.916575  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:56.916604  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:58.917108  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:01.417633  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:03.917219  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:06.416808  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:08.917079  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:11.417333  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:13.417408  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:15.917166  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:18.415994  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:20.416647  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:22.917094  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:24.919800  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:27.416902  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:29.417714  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:31.917189  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:34.417311  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:36.916350  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:38.917416  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:41.416812  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:43.417080  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:45.916487  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:47.917346  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:50.416654  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:52.917124  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:55.416999  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:57.417311  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:59.916704  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:01.919070  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:04.416758  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:06.416952  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:08.916903  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:11.416562  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:13.417202  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:15.917270  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:18.416813  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:20.917286  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:23.416732  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:25.417405  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:27.916529  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:29.916950  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:32.417231  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:34.916940  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:37.416873  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:39.417294  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:41.916140  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:43.916375  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:45.916655  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:47.916977  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:50.416682  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:52.417097  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:54.916635  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:57.416816  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:59.916263  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:01.916974  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:03.917239  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:06.416793  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:08.417072  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:10.916349  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:13.416821  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:15.916263  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:17.916820  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:19.917768  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:22.416608  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:24.417657  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:26.916718  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:28.916894  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:31.417519  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:33.418814  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:35.916938  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:38.416980  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:40.916839  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:42.917145  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:44.917492  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:47.417047  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:49.916565  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:51.916916  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:54.416695  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:56.419030  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:58.916323  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:00.917565  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:03.416572  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:05.416612  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:07.917363  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:10.416406  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:12.416604  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:14.916267  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:16.916810  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:19.417492  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:21.916818  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:23.917104  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:26.416941  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:28.916283  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:29.912039  165743 pod_ready.go:81] duration metric: took 4m0.005300509s waiting for pod "etcd-test-preload-170735" in "kube-system" namespace to be "Ready" ...
	E1107 17:13:29.912067  165743 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-170735" in "kube-system" namespace to be "Ready" (will not retry!)
	I1107 17:13:29.912099  165743 pod_ready.go:38] duration metric: took 4m14.519613554s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:13:29.912140  165743 kubeadm.go:631] restartCluster took 4m26.575555046s
	W1107 17:13:29.912302  165743 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1107 17:13:29.912357  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1107 17:13:31.585704  165743 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.673321164s)
	I1107 17:13:31.585763  165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:13:31.595197  165743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 17:13:31.601977  165743 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 17:13:31.602022  165743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:13:31.608611  165743 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 17:13:31.608656  165743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 17:13:31.641698  165743 kubeadm.go:317] W1107 17:13:31.640965    6500 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1107 17:13:31.673782  165743 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1107 17:13:31.734442  165743 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 17:13:31.734566  165743 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I1107 17:13:31.734625  165743 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I1107 17:13:31.734689  165743 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I1107 17:13:31.734827  165743 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1107 17:13:31.734917  165743 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1107 17:13:31.736598  165743 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I1107 17:13:31.736666  165743 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 17:13:31.736791  165743 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1107 17:13:31.736841  165743 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1107 17:13:31.736892  165743 kubeadm.go:317] OS: Linux
	I1107 17:13:31.736952  165743 kubeadm.go:317] CGROUPS_CPU: enabled
	I1107 17:13:31.737020  165743 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1107 17:13:31.737089  165743 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1107 17:13:31.737161  165743 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1107 17:13:31.737230  165743 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1107 17:13:31.737297  165743 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1107 17:13:31.737366  165743 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1107 17:13:31.737432  165743 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1107 17:13:31.737511  165743 kubeadm.go:317] CGROUPS_BLKIO: enabled
	W1107 17:13:31.737713  165743 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1107 17:13:31.640965    6500 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	I1107 17:13:31.737760  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1107 17:13:32.054639  165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:13:32.063813  165743 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 17:13:32.063875  165743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:13:32.070411  165743 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 17:13:32.070456  165743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 17:13:32.107519  165743 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I1107 17:13:32.107565  165743 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 17:13:32.134497  165743 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1107 17:13:32.134580  165743 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1107 17:13:32.134633  165743 kubeadm.go:317] OS: Linux
	I1107 17:13:32.134687  165743 kubeadm.go:317] CGROUPS_CPU: enabled
	I1107 17:13:32.134791  165743 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1107 17:13:32.134877  165743 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1107 17:13:32.134944  165743 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1107 17:13:32.135016  165743 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1107 17:13:32.135087  165743 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1107 17:13:32.135156  165743 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1107 17:13:32.135221  165743 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1107 17:13:32.135314  165743 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1107 17:13:32.196691  165743 kubeadm.go:317] W1107 17:13:32.102889    6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1107 17:13:32.196897  165743 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1107 17:13:32.197035  165743 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 17:13:32.197117  165743 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I1107 17:13:32.197155  165743 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I1107 17:13:32.197197  165743 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I1107 17:13:32.197292  165743 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1107 17:13:32.197352  165743 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1107 17:13:32.197439  165743 kubeadm.go:398] StartCluster complete in 4m28.987546075s
	I1107 17:13:32.197484  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:13:32.197525  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:13:32.220007  165743 cri.go:87] found id: ""
	I1107 17:13:32.220032  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.220040  165743 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:13:32.220053  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:13:32.220102  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:13:32.242014  165743 cri.go:87] found id: ""
	I1107 17:13:32.242043  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.242053  165743 logs.go:276] No container was found matching "etcd"
	I1107 17:13:32.242066  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:13:32.242112  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:13:32.262942  165743 cri.go:87] found id: ""
	I1107 17:13:32.262979  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.262988  165743 logs.go:276] No container was found matching "coredns"
	I1107 17:13:32.262995  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:13:32.263034  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:13:32.284464  165743 cri.go:87] found id: ""
	I1107 17:13:32.284488  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.284494  165743 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:13:32.284501  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:13:32.284552  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:13:32.307214  165743 cri.go:87] found id: ""
	I1107 17:13:32.307243  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.307252  165743 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:13:32.307260  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:13:32.307310  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:13:32.329151  165743 cri.go:87] found id: ""
	I1107 17:13:32.329180  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.329196  165743 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:13:32.329205  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:13:32.329257  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:13:32.350599  165743 cri.go:87] found id: ""
	I1107 17:13:32.350623  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.350629  165743 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:13:32.350635  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:13:32.350673  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:13:32.372494  165743 cri.go:87] found id: ""
	I1107 17:13:32.372522  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.372532  165743 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:13:32.372545  165743 logs.go:123] Gathering logs for kubelet ...
	I1107 17:13:32.372558  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:13:32.435840  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926231    4309 projected.go:192] Error preparing data for projected volume kube-api-access-l9w87 for pod kube-system/kindnet-fh9w9: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.436259  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926837    4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87 podName:eca84e65-57b5-4cc9-b42a-0f991c91ffe7 nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.926808887 +0000 UTC m=+10.477882581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l9w87" (UniqueName: "kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87") pod "kindnet-fh9w9" (UID: "eca84e65-57b5-4cc9-b42a-0f991c91ffe7") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.436411  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927004    4309 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.436578  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927081    4309 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.436766  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927198    4309 projected.go:192] Error preparing data for projected volume kube-api-access-7jl9q for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.437177  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927299    4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c43d0d64-f743-4627-894e-be6b8af2e64d-kube-api-access-7jl9q podName:c43d0d64-f743-4627-894e-be6b8af2e64d nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.927284243 +0000 UTC m=+10.478357937 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7jl9q" (UniqueName: "kubernetes.io/projected/c43d0d64-f743-4627-894e-be6b8af2e64d-kube-api-access-7jl9q") pod "storage-provisioner" (UID: "c43d0d64-f743-4627-894e-be6b8af2e64d") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.437330  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927404    4309 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.437497  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927466    4309 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.437684  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927560    4309 projected.go:192] Error preparing data for projected volume kube-api-access-6vv4c for pod kube-system/kube-proxy-lv445: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.438089  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927649    4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcbfbd08-498e-4a9c-8d36-0d45cbd312bd-kube-api-access-6vv4c podName:fcbfbd08-498e-4a9c-8d36-0d45cbd312bd nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.927635728 +0000 UTC m=+10.478709423 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6vv4c" (UniqueName: "kubernetes.io/projected/fcbfbd08-498e-4a9c-8d36-0d45cbd312bd-kube-api-access-6vv4c") pod "kube-proxy-lv445" (UID: "fcbfbd08-498e-4a9c-8d36-0d45cbd312bd") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.438269  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927751    4309 projected.go:192] Error preparing data for projected volume kube-api-access-qmxlx for pod kube-system/coredns-6d4b75cb6d-46n4z: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.438700  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927842    4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bb47afc-9c44-48b3-8dd4-966ed2608a7a-kube-api-access-qmxlx podName:0bb47afc-9c44-48b3-8dd4-966ed2608a7a nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.927829872 +0000 UTC m=+10.478903566 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qmxlx" (UniqueName: "kubernetes.io/projected/0bb47afc-9c44-48b3-8dd4-966ed2608a7a-kube-api-access-qmxlx") pod "coredns-6d4b75cb6d-46n4z" (UID: "0bb47afc-9c44-48b3-8dd4-966ed2608a7a") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.438846  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927954    4309 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.439007  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.928028    4309 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	I1107 17:13:32.459618  165743 logs.go:123] Gathering logs for dmesg ...
	I1107 17:13:32.459642  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:13:32.475496  165743 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:13:32.475522  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:13:32.524048  165743 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:13:32.524077  165743 logs.go:123] Gathering logs for containerd ...
	I1107 17:13:32.524091  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:13:32.579264  165743 logs.go:123] Gathering logs for container status ...
	I1107 17:13:32.579299  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1107 17:13:32.605796  165743 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1107 17:13:32.102889    6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W1107 17:13:32.605835  165743 out.go:239] * 
	W1107 17:13:32.605973  165743 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1107 17:13:32.102889    6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 17:13:32.606006  165743 out.go:239] * 
	W1107 17:13:32.606836  165743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 17:13:32.608746  165743 out.go:177] X Problems detected in kubelet:
	I1107 17:13:32.610170  165743 out.go:177]   Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926231    4309 projected.go:192] Error preparing data for projected volume kube-api-access-l9w87 for pod kube-system/kindnet-fh9w9: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	I1107 17:13:32.612470  165743 out.go:177]   Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926837    4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87 podName:eca84e65-57b5-4cc9-b42a-0f991c91ffe7 nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.926808887 +0000 UTC m=+10.477882581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l9w87" (UniqueName: "kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87") pod "kindnet-fh9w9" (UID: "eca84e65-57b5-4cc9-b42a-0f991c91ffe7") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	I1107 17:13:32.614018  165743 out.go:177]   Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927004    4309 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	I1107 17:13:32.616027  165743 out.go:177] 
	W1107 17:13:32.617358  165743 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1107 17:13:32.102889    6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 17:13:32.617464  165743 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1107 17:13:32.617526  165743 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1107 17:13:32.619660  165743 out.go:177] 

** /stderr **
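The fatal preflight errors above (`Port-2379`/`Port-2380` are etcd's client and peer ports) mean something was still listening when the second `kubeadm init` ran, most plausibly etcd left over from the first start of the profile. A minimal sketch of the condition kubeadm's port check trips on, using a throwaway port rather than the real etcd ports:

```python
import socket

# Hold a port the way a leftover etcd would, then show that a second
# bind to the same address fails -- the condition behind "[ERROR Port-2379]".
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))        # let the OS pick a free port for the demo
port = holder.getsockname()[1]
holder.listen(1)

probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    probe.bind(("127.0.0.1", port))  # same address:port while it is held
    in_use = False
except OSError:                      # EADDRINUSE
    in_use = True
finally:
    probe.close()
    holder.close()

print(in_use)  # True
```

On the real host, `sudo lsof -i :2379` (the `-i` flag filters by network address; `-p` filters by PID) or `ss -ltnp` would identify the process holding the port.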
preload_test.go:69: out/minikube-linux-amd64 start -p test-preload-170735 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6 failed: exit status 81
panic.go:522: *** TestPreload FAILED at 2022-11-07 17:13:32.664073651 +0000 UTC m=+1686.958491162
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-170735
helpers_test.go:235: (dbg) docker inspect test-preload-170735:

-- stdout --
	[
	    {
	        "Id": "562352745c30197ec8ca41bd220d69e2934fde42c536a6c6d77373c0daf0d2cb",
	        "Created": "2022-11-07T17:07:37.332353721Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 162554,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:07:37.793781997Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/562352745c30197ec8ca41bd220d69e2934fde42c536a6c6d77373c0daf0d2cb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/562352745c30197ec8ca41bd220d69e2934fde42c536a6c6d77373c0daf0d2cb/hostname",
	        "HostsPath": "/var/lib/docker/containers/562352745c30197ec8ca41bd220d69e2934fde42c536a6c6d77373c0daf0d2cb/hosts",
	        "LogPath": "/var/lib/docker/containers/562352745c30197ec8ca41bd220d69e2934fde42c536a6c6d77373c0daf0d2cb/562352745c30197ec8ca41bd220d69e2934fde42c536a6c6d77373c0daf0d2cb-json.log",
	        "Name": "/test-preload-170735",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-170735:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-170735",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/272c116f8b9e09d720cdc22e58042bf497b39a76c82e0a08d90ef1ffec7e6f7d-init/diff:/var/lib/docker/overlay2/50f34786c57872c77d74fc1e1bfc5c830eecdaaa307731f7f0968ecd4a1f1563/diff:/var/lib/docker/overlay2/7bd2077ca57b1a9d268f813d36a75f7979f1fc4acedca337c909926df0984abc/diff:/var/lib/docker/overlay2/fc584b8d731e3e1a78208322d9ad4f5e4ad9c3bcaa0f08927b91ce3c8637e0c1/diff:/var/lib/docker/overlay2/b1015b3e809f7445f186f197e10ccde2f6313a9c6860e2a15469f8efb401040d/diff:/var/lib/docker/overlay2/c333cad43ceb2005c0c4df6e6055a141624b85a82498fdd043cc72ccb83232a2/diff:/var/lib/docker/overlay2/e8adaa498090aa250a4bb91e7b41283b97dd43550202038f2ba75fb6fce1963e/diff:/var/lib/docker/overlay2/21ee34913cc32f41efb30d896d169ee516ce1865cdf9ed62125bad1d7b760ebf/diff:/var/lib/docker/overlay2/1b1e3fc8fc878d0731cfc2e081355a9d88e2832592699aec0d7fdef0b4aa2536/diff:/var/lib/docker/overlay2/4b91e729bf04aac130fb8d8bfcab139c95e0ef3f6a774013de6b68a489234ec6/diff:/var/lib/docker/overlay2/4fa234
40214db584cc2d06610d07177bcb3f52aaa6485fc6d0c5fe8830500eb8/diff:/var/lib/docker/overlay2/16748108f66ccb40a4a3b20805c0085d2865c56f7f76ef79cad24498e9ffe9d0/diff:/var/lib/docker/overlay2/ed8e95539c1661d85da89eceddad9e582c9ea46b80010c6f68d080d92c9d6b5a/diff:/var/lib/docker/overlay2/df5567a2898a9e8a1be97266503eb95798b79e37668e3073e7f439219defa1b1/diff:/var/lib/docker/overlay2/b70d157c56a0610efd610495efa704a0548753e54dc2f98f56c33b18d5bdb831/diff:/var/lib/docker/overlay2/3a1efa8a7fda429b96ee67adce9f25aa586838fff1d0e33a145074eb35f92e3b/diff:/var/lib/docker/overlay2/adec1560668aa1c06d2f672622d778fb7c7a9958814773573f9b9bd167f6c860/diff:/var/lib/docker/overlay2/b092628cb8f256d44c2fbb9ae9bccaf57d2d6209aa4f402d78256949eae7feb3/diff:/var/lib/docker/overlay2/3356cfa5fa7047a97e9c2b7cb8952bdbe042be5633202a2fb86fb78eb24d01c3/diff:/var/lib/docker/overlay2/e2eda1c37c57f4adc2cf7cba48eed6c8ffe3d2f47e31c07d647fd0597cb1aaee/diff:/var/lib/docker/overlay2/0fdab607cc4d78cb0a3fbd3041f4d6f1fabd525b190ca8fe214ce0d708a7f772/diff:/var/lib/d
ocker/overlay2/746235f8e2202d20a55b5a9fea42575d53cbce903cd7196f79b6546eb912216c/diff:/var/lib/docker/overlay2/bb90b859707e89d2d71c36f1d9688d6b09d32b9fce71c1a4caffab9be2bbb188/diff:/var/lib/docker/overlay2/10fdb9cfaf7ec1249107401913d80e6952d57412f21964005f33a1ec0edbc3bc/diff:/var/lib/docker/overlay2/c1af211c834a44cc9932c4e3a12691a9d1d7c2e14e241cb5a8b881d40534523f/diff:/var/lib/docker/overlay2/de7a70af2c1a55113b9be8a92239749d35dd866bda013a8048f5bccbc98a258d/diff:/var/lib/docker/overlay2/638ba6771779e36e94f47227270733bc19e786d6084420c1cb46c8d942883a6b/diff:/var/lib/docker/overlay2/f4e0800cf49a41c3993c1d146cd1613cacaf8996e27b642bc6359f30ae301891/diff:/var/lib/docker/overlay2/0c8275272897551e4e3bd4a403ea631396d4e226e0d1524a973391b15b868f09/diff:/var/lib/docker/overlay2/405eea0895fd24bd6bcbfa316e2f2f55186a3a8c11836a41776b7078210cef3e/diff:/var/lib/docker/overlay2/5344d9cb5a12ef430d7c5246346fdf0be30cf22430cea41ce3eeff0db5b4d629/diff:/var/lib/docker/overlay2/3a1aae2d89cdb6efed9f25c1aa5fc3b09afd34de1dea7ab15bbf250d2c1
ccaeb/diff:/var/lib/docker/overlay2/fe4503be964576b1bd1b38c1789d575ebd1d3a40807fc8fddd0d03689f815101/diff:/var/lib/docker/overlay2/cd964cc10ac76d7d224e0c14361f663890fb1aa42543b9e6aad6231ce574ab75/diff:/var/lib/docker/overlay2/d3b7495eb871dc08a1299ff6623317982ae4fcb245a496232f5ecb3c7db2f65e/diff:/var/lib/docker/overlay2/f47e602141e8a2a0110308ae1e12d31d503b156f1438454b031a4428e38d6fdf/diff:/var/lib/docker/overlay2/2fa5513e215c12fbae0f66df8f9239d68407115fc99d2d61fad469cab8e90074/diff:/var/lib/docker/overlay2/35a81d0664a9558cbb797f91f0936edc4dc40d04124e0e087016a1965853fd34/diff:/var/lib/docker/overlay2/0335b50ae6313640c86195beb2c170e6024ff55e7e7c5d4799d3fb36388be83a/diff:/var/lib/docker/overlay2/4756e235309d1e95924ec8f07ff825ebdcd7384760cb06121fcb6299bbad2e5c/diff:/var/lib/docker/overlay2/b3a9deb3bf75ddb8b41c22ba322da02c3379475903d07dd985bcef4a317a514a/diff:/var/lib/docker/overlay2/2e829bbc0c18a173f30f9904a6e0a3b3dd0b06b9f8e518ddcf6d4b8237876fb8/diff:/var/lib/docker/overlay2/eaf774e8177ba46b1b9f087012edcc4e413aa6
e302e711cb62dae1ca92ac7b5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/272c116f8b9e09d720cdc22e58042bf497b39a76c82e0a08d90ef1ffec7e6f7d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/272c116f8b9e09d720cdc22e58042bf497b39a76c82e0a08d90ef1ffec7e6f7d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/272c116f8b9e09d720cdc22e58042bf497b39a76c82e0a08d90ef1ffec7e6f7d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-170735",
	                "Source": "/var/lib/docker/volumes/test-preload-170735/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-170735",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-170735",
	                "name.minikube.sigs.k8s.io": "test-preload-170735",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a77c3bc8f88f44237e7b8dd35cbcb2dd9891949bc305deffc304cde6b3dee027",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49277"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49276"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49273"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49275"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49274"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a77c3bc8f88f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-170735": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "562352745c30",
	                        "test-preload-170735"
	                    ],
	                    "NetworkID": "b8cc33fdda8232591e18678d9318c33cc1cb5258fad05652407c6b9a060581e3",
	                    "EndpointID": "e2be8473b88f2e73d93bb7868f13df004e2b698e4c50f74a2673aba5d2152fed",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
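Pulling the handful of fields the post-mortem cares about (container state, mapped ports) is easier than scanning the full JSON dump above. A small sketch; the sample below is an abbreviated, hand-trimmed excerpt mirroring the real output, not the full inspect record:

```python
import json

# Abbreviated, hypothetical excerpt of the `docker inspect` output above;
# the real dump has many more fields.
sample = """
[{"Id": "562352745c30",
  "State": {"Status": "running", "Running": true, "Pid": 162554},
  "NetworkSettings": {"Ports": {"8443/tcp": [{"HostIp": "127.0.0.1",
                                              "HostPort": "49274"}]}}}]
"""
container = json.loads(sample)[0]
status = container["State"]["Status"]
apiserver_port = container["NetworkSettings"]["Ports"]["8443/tcp"][0]["HostPort"]
print(status, apiserver_port)  # running 49274
```

The same selection can be done without leaving the shell via Go templates, e.g. `docker inspect -f '{{.State.Status}}' test-preload-170735`.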
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-170735 -n test-preload-170735
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-170735 -n test-preload-170735: exit status 2 (345.659651ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-170735 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-165923 ssh -n                                                                 | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
	|         | multinode-165923-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| cp      | multinode-165923 cp multinode-165923-m03:/home/docker/cp-test.txt                       | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
	|         | multinode-165923:/home/docker/cp-test_multinode-165923-m03_multinode-165923.txt         |                      |         |         |                     |                     |
	| ssh     | multinode-165923 ssh -n                                                                 | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
	|         | multinode-165923-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-165923 ssh -n multinode-165923 sudo cat                                       | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
	|         | /home/docker/cp-test_multinode-165923-m03_multinode-165923.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-165923 cp multinode-165923-m03:/home/docker/cp-test.txt                       | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
	|         | multinode-165923-m02:/home/docker/cp-test_multinode-165923-m03_multinode-165923-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-165923 ssh -n                                                                 | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
	|         | multinode-165923-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-165923 ssh -n multinode-165923-m02 sudo cat                                   | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
	|         | /home/docker/cp-test_multinode-165923-m03_multinode-165923-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-165923 node stop m03                                                          | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
	| node    | multinode-165923 node start                                                             | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:02 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-165923                                                                | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:02 UTC |                     |
	| stop    | -p multinode-165923                                                                     | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:02 UTC | 07 Nov 22 17:02 UTC |
	| start   | -p multinode-165923                                                                     | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:02 UTC | 07 Nov 22 17:04 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-165923                                                                | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:04 UTC |                     |
	| node    | multinode-165923 node delete                                                            | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:04 UTC | 07 Nov 22 17:04 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-165923 stop                                                                   | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:04 UTC | 07 Nov 22 17:05 UTC |
	| start   | -p multinode-165923                                                                     | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:05 UTC | 07 Nov 22 17:07 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | list -p multinode-165923                                                                | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC |                     |
	| start   | -p multinode-165923-m02                                                                 | multinode-165923-m02 | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| start   | -p multinode-165923-m03                                                                 | multinode-165923-m03 | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC | 07 Nov 22 17:07 UTC |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | add -p multinode-165923                                                                 | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC |                     |
	| delete  | -p multinode-165923-m03                                                                 | multinode-165923-m03 | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC | 07 Nov 22 17:07 UTC |
	| delete  | -p multinode-165923                                                                     | multinode-165923     | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC | 07 Nov 22 17:07 UTC |
	| start   | -p test-preload-170735                                                                  | test-preload-170735  | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC | 07 Nov 22 17:08 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --wait=true --preload=false                                                             |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-170735                                                                  | test-preload-170735  | jenkins | v1.28.0 | 07 Nov 22 17:08 UTC | 07 Nov 22 17:08 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| start   | -p test-preload-170735                                                                  | test-preload-170735  | jenkins | v1.28.0 | 07 Nov 22 17:08 UTC |                     |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=docker                                                             |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.6                                                            |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 17:08:27
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 17:08:27.904911  165743 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:08:27.905045  165743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:08:27.905060  165743 out.go:309] Setting ErrFile to fd 2...
	I1107 17:08:27.905068  165743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:08:27.905197  165743 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
	I1107 17:08:27.905863  165743 out.go:303] Setting JSON to false
	I1107 17:08:27.907218  165743 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10261,"bootTime":1667830647,"procs":524,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 17:08:27.907299  165743 start.go:126] virtualization: kvm guest
	I1107 17:08:27.910260  165743 out.go:177] * [test-preload-170735] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 17:08:27.912717  165743 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:08:27.912644  165743 notify.go:220] Checking for updates...
	I1107 17:08:27.914611  165743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:08:27.916178  165743 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	I1107 17:08:27.917748  165743 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	I1107 17:08:27.919131  165743 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 17:08:27.921065  165743 config.go:180] Loaded profile config "test-preload-170735": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I1107 17:08:27.923047  165743 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I1107 17:08:27.924546  165743 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:08:27.952793  165743 docker.go:137] docker version: linux-20.10.21
	I1107 17:08:27.952897  165743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:08:28.051499  165743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-07 17:08:27.973134397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:08:28.051613  165743 docker.go:254] overlay module found
	I1107 17:08:28.054907  165743 out.go:177] * Using the docker driver based on existing profile
	I1107 17:08:28.056422  165743 start.go:282] selected driver: docker
	I1107 17:08:28.056442  165743 start.go:808] validating driver "docker" against &{Name:test-preload-170735 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:08:28.056553  165743 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:08:28.057351  165743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:08:28.151882  165743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-07 17:08:28.076276154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:08:28.152201  165743 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 17:08:28.152232  165743 cni.go:95] Creating CNI manager for ""
	I1107 17:08:28.152241  165743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1107 17:08:28.152260  165743 start_flags.go:317] config:
	{Name:test-preload-170735 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:08:28.155619  165743 out.go:177] * Starting control plane node test-preload-170735 in cluster test-preload-170735
	I1107 17:08:28.156954  165743 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1107 17:08:28.158499  165743 out.go:177] * Pulling base image ...
	I1107 17:08:28.159890  165743 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1107 17:08:28.159983  165743 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 17:08:28.181208  165743 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1107 17:08:28.181243  165743 cache.go:57] Caching tarball of preloaded images
	I1107 17:08:28.181535  165743 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1107 17:08:28.183696  165743 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
	I1107 17:08:28.182675  165743 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 17:08:28.183727  165743 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 17:08:28.185282  165743 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1107 17:08:28.211318  165743 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1107 17:08:32.100806  165743 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1107 17:08:32.100913  165743 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1107 17:08:33.024863  165743 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.6 on containerd
	I1107 17:08:33.025006  165743 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/config.json ...
	I1107 17:08:33.025200  165743 cache.go:208] Successfully downloaded all kic artifacts
	I1107 17:08:33.025245  165743 start.go:364] acquiring machines lock for test-preload-170735: {Name:mkeed53a7896dfd155258ca3d33f2ba7f27b6e3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 17:08:33.025355  165743 start.go:368] acquired machines lock for "test-preload-170735" in 83.257µs
	I1107 17:08:33.025378  165743 start.go:96] Skipping create...Using existing machine configuration
	I1107 17:08:33.025389  165743 fix.go:55] fixHost starting: 
	I1107 17:08:33.025604  165743 cli_runner.go:164] Run: docker container inspect test-preload-170735 --format={{.State.Status}}
	I1107 17:08:33.047785  165743 fix.go:103] recreateIfNeeded on test-preload-170735: state=Running err=<nil>
	W1107 17:08:33.047814  165743 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 17:08:33.051368  165743 out.go:177] * Updating the running docker "test-preload-170735" container ...
	I1107 17:08:33.053014  165743 machine.go:88] provisioning docker machine ...
	I1107 17:08:33.053055  165743 ubuntu.go:169] provisioning hostname "test-preload-170735"
	I1107 17:08:33.053104  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.073975  165743 main.go:134] libmachine: Using SSH client type: native
	I1107 17:08:33.074165  165743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49277 <nil> <nil>}
	I1107 17:08:33.074183  165743 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-170735 && echo "test-preload-170735" | sudo tee /etc/hostname
	I1107 17:08:33.197853  165743 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-170735
	
	I1107 17:08:33.197933  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.220254  165743 main.go:134] libmachine: Using SSH client type: native
	I1107 17:08:33.220408  165743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49277 <nil> <nil>}
	I1107 17:08:33.220428  165743 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-170735' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-170735/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-170735' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 17:08:33.333808  165743 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 17:08:33.333842  165743 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-44720/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-44720/.minikube}
	I1107 17:08:33.333861  165743 ubuntu.go:177] setting up certificates
	I1107 17:08:33.333869  165743 provision.go:83] configureAuth start
	I1107 17:08:33.333914  165743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-170735
	I1107 17:08:33.355318  165743 provision.go:138] copyHostCerts
	I1107 17:08:33.355367  165743 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem, removing ...
	I1107 17:08:33.355376  165743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem
	I1107 17:08:33.355441  165743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem (1082 bytes)
	I1107 17:08:33.355534  165743 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem, removing ...
	I1107 17:08:33.355545  165743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem
	I1107 17:08:33.355581  165743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem (1123 bytes)
	I1107 17:08:33.355641  165743 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem, removing ...
	I1107 17:08:33.355651  165743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem
	I1107 17:08:33.355689  165743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem (1679 bytes)
	I1107 17:08:33.355768  165743 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem org=jenkins.test-preload-170735 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-170735]
	I1107 17:08:33.436719  165743 provision.go:172] copyRemoteCerts
	I1107 17:08:33.436773  165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 17:08:33.436826  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.458416  165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
	I1107 17:08:33.541280  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 17:08:33.558205  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1107 17:08:33.574372  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 17:08:33.590572  165743 provision.go:86] duration metric: configureAuth took 256.685343ms
	I1107 17:08:33.590604  165743 ubuntu.go:193] setting minikube options for container-runtime
	I1107 17:08:33.590765  165743 config.go:180] Loaded profile config "test-preload-170735": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I1107 17:08:33.590782  165743 machine.go:91] provisioned docker machine in 537.75012ms
	I1107 17:08:33.590791  165743 start.go:300] post-start starting for "test-preload-170735" (driver="docker")
	I1107 17:08:33.590802  165743 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 17:08:33.590840  165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 17:08:33.590874  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.613972  165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
	I1107 17:08:33.697134  165743 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 17:08:33.699654  165743 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 17:08:33.699688  165743 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 17:08:33.699706  165743 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 17:08:33.699715  165743 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 17:08:33.699735  165743 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-44720/.minikube/addons for local assets ...
	I1107 17:08:33.699785  165743 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-44720/.minikube/files for local assets ...
	I1107 17:08:33.699859  165743 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem -> 511762.pem in /etc/ssl/certs
	I1107 17:08:33.699972  165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 17:08:33.706647  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem --> /etc/ssl/certs/511762.pem (1708 bytes)
	I1107 17:08:33.723587  165743 start.go:303] post-start completed in 132.77869ms
	I1107 17:08:33.723655  165743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 17:08:33.723701  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.745091  165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
	I1107 17:08:33.826766  165743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 17:08:33.830752  165743 fix.go:57] fixHost completed within 805.356487ms
	I1107 17:08:33.830779  165743 start.go:83] releasing machines lock for "test-preload-170735", held for 805.406949ms
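The two df probes just above read one column from df's data row: `df -h /var | awk 'NR==2{print $5}'` for the use percentage and `df -BG /var | awk 'NR==2{print $4}'` for free space in 1 GiB blocks. A local sketch of the same probes against `/` (any mounted path works; `/var` is what the test checks):

```shell
# df prints a header line, so NR==2 selects the data row; $5 is the
# Use% column and $4 is Avail. -BG scales the Avail figure to GiB.
used_pct="$(df -h / | awk 'NR==2{print $5}')"
avail_gib="$(df -BG / | awk 'NR==2{print $4}')"
echo "used=$used_pct avail=$avail_gib"
```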
	I1107 17:08:33.830865  165743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-170735
	I1107 17:08:33.851188  165743 ssh_runner.go:195] Run: systemctl --version
	I1107 17:08:33.851233  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.851246  165743 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1107 17:08:33.851299  165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
	I1107 17:08:33.874050  165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
	I1107 17:08:33.874539  165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
	I1107 17:08:33.970640  165743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1107 17:08:33.980208  165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 17:08:33.989283  165743 docker.go:189] disabling docker service ...
	I1107 17:08:33.989328  165743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 17:08:33.998251  165743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 17:08:34.006544  165743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 17:08:34.105872  165743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 17:08:34.199735  165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 17:08:34.208838  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 17:08:34.221138  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I1107 17:08:34.228758  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1107 17:08:34.237433  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1107 17:08:34.245113  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I1107 17:08:34.252514  165743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 17:08:34.258488  165743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 17:08:34.264983  165743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:08:34.355600  165743 ssh_runner.go:195] Run: sudo systemctl restart containerd
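The container-runtime setup above writes `/etc/crictl.yaml`, rewrites four keys in `/etc/containerd/config.toml` with sed, then reloads and restarts containerd. A safe-to-run sketch of the same edits against a scratch copy (values taken from the log; the sample config.toml contents are assumed, since the real file is much larger):

```shell
# Reproduce the crictl/containerd config edits from the log in a
# scratch directory (the real targets are /etc/crictl.yaml and
# /etc/containerd/config.toml, which need sudo).
workdir="$(mktemp -d)"

# 1. Point crictl at the containerd socket.
printf '%s\n' \
  'runtime-endpoint: unix:///run/containerd/containerd.sock' \
  'image-endpoint: unix:///run/containerd/containerd.sock' \
  > "$workdir/crictl.yaml"

# 2. A minimal stand-in for config.toml holding the keys the log rewrites.
cat > "$workdir/config.toml" <<'EOF'
sandbox_image = "k8s.gcr.io/pause:3.6"
restrict_oom_score_adj = true
SystemdCgroup = true
conf_dir = "/etc/cni/net.d"
EOF

# 3. The same four sed rewrites the log applies.
sed -i \
  -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' \
  -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' \
  -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' \
  -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' \
  "$workdir/config.toml"

cat "$workdir/config.toml"
```

On the real node this is followed by `systemctl daemon-reload` and `systemctl restart containerd`, after which `crictl version` is polled until the runtime answers (the log shows one "server is not initialized yet" retry).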
	I1107 17:08:34.426498  165743 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1107 17:08:34.426584  165743 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1107 17:08:34.431077  165743 start.go:472] Will wait 60s for crictl version
	I1107 17:08:34.431141  165743 ssh_runner.go:195] Run: sudo crictl version
	I1107 17:08:34.463332  165743 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-11-07T17:08:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1107 17:08:45.511931  165743 ssh_runner.go:195] Run: sudo crictl version
	I1107 17:08:45.534402  165743 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1107 17:08:45.534456  165743 ssh_runner.go:195] Run: containerd --version
	I1107 17:08:45.557129  165743 ssh_runner.go:195] Run: containerd --version
	I1107 17:08:45.581034  165743 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
	I1107 17:08:45.583252  165743 cli_runner.go:164] Run: docker network inspect test-preload-170735 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:08:45.604171  165743 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1107 17:08:45.607584  165743 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1107 17:08:45.607660  165743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 17:08:45.629696  165743 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
	I1107 17:08:45.629765  165743 ssh_runner.go:195] Run: which lz4
	I1107 17:08:45.632520  165743 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1107 17:08:45.635397  165743 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I1107 17:08:45.635419  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
	I1107 17:08:46.608662  165743 containerd.go:496] Took 0.976169 seconds to copy over tarball
	I1107 17:08:46.608757  165743 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 17:08:49.268239  165743 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.659458437s)
	I1107 17:08:49.268269  165743 containerd.go:503] Took 2.659548 seconds to extract the tarball
	I1107 17:08:49.268278  165743 ssh_runner.go:146] rm: /preloaded.tar.lz4
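Since no preload exists on the node (the `stat /preloaded.tar.lz4` check fails), the ~458 MB tarball is copied over and unpacked with `tar -I lz4 -C /var -xf`, then deleted. The same pack/unpack round-trip, sketched with gzip substituted for lz4 so it runs where lz4 is not installed, against scratch directories rather than `/var`:

```shell
# Round-trip sketch of the preload tarball handling. The log uses
# lz4 (`tar -I lz4 -C /var -xf /preloaded.tar.lz4`); gzip stands in
# here purely for portability.
src="$(mktemp -d)" dst="$(mktemp -d)"
mkdir -p "$src/lib/containerd"
echo "fake-image-layer" > "$src/lib/containerd/layer.bin"

tar -I gzip -C "$src" -cf "$src.tar.gz" lib   # pack (build side)
tar -I gzip -C "$dst" -xf "$src.tar.gz"       # unpack (node side)

cat "$dst/lib/containerd/layer.bin"
```

`-C` makes tar change directory before packing or extracting, which is how the real tarball lands directly under `/var` (e.g. `/var/lib/containerd/...`).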
	I1107 17:08:49.290385  165743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:08:49.394503  165743 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1107 17:08:49.483535  165743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 17:08:49.508155  165743 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1107 17:08:49.508249  165743 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:08:49.508261  165743 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
	I1107 17:08:49.508303  165743 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I1107 17:08:49.508328  165743 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1107 17:08:49.508333  165743 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I1107 17:08:49.508363  165743 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1107 17:08:49.508413  165743 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1107 17:08:49.508304  165743 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1107 17:08:49.509646  165743 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1107 17:08:49.509674  165743 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1107 17:08:49.509722  165743 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1107 17:08:49.509649  165743 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:08:49.509638  165743 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1107 17:08:49.509650  165743 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I1107 17:08:49.509774  165743 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I1107 17:08:49.509643  165743 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
	I1107 17:08:49.721200  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I1107 17:08:49.721693  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I1107 17:08:49.738860  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I1107 17:08:49.739213  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
	I1107 17:08:49.747795  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1107 17:08:49.758483  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
	I1107 17:08:49.761130  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
	I1107 17:08:49.977049  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
	I1107 17:08:50.610195  165743 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1107 17:08:50.610249  165743 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I1107 17:08:50.610292  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.614352  165743 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1107 17:08:50.614406  165743 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I1107 17:08:50.614453  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.705332  165743 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1107 17:08:50.705390  165743 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1107 17:08:50.705338  165743 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
	I1107 17:08:50.705434  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.705452  165743 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
	I1107 17:08:50.705619  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.717541  165743 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1107 17:08:50.717591  165743 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:08:50.717638  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.719439  165743 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
	I1107 17:08:50.719499  165743 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1107 17:08:50.719544  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.719689  165743 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
	I1107 17:08:50.719723  165743 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1107 17:08:50.719758  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.814270  165743 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
	I1107 17:08:50.814353  165743 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1107 17:08:50.814361  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I1107 17:08:50.814382  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:08:50.814394  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I1107 17:08:50.814410  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
	I1107 17:08:50.814414  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:08:50.814427  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
	I1107 17:08:50.814384  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I1107 17:08:50.814449  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
	I1107 17:08:52.582624  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.768192619s)
	I1107 17:08:52.582662  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I1107 17:08:52.582681  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (1.768236997s)
	I1107 17:08:52.582691  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I1107 17:08:52.582637  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.768194557s)
	I1107 17:08:52.582747  165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1107 17:08:52.582772  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (1.768339669s)
	I1107 17:08:52.582798  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
	I1107 17:08:52.582748  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1107 17:08:52.582749  165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1107 17:08:52.582829  165743 ssh_runner.go:235] Completed: which crictl: (1.768411501s)
	I1107 17:08:52.582855  165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1107 17:08:52.582878  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
	I1107 17:08:52.585359  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6: (1.770910623s)
	I1107 17:08:52.585380  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
	I1107 17:08:52.585416  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (1.771036539s)
	I1107 17:08:52.585438  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I1107 17:08:52.585502  165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1107 17:08:52.585583  165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6: (1.771118502s)
	I1107 17:08:52.585599  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
	I1107 17:08:52.587242  165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1107 17:08:52.587261  165743 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1107 17:08:52.587294  165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I1107 17:08:52.676919  165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1107 17:08:52.677014  165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
	I1107 17:08:52.677049  165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1107 17:08:52.677110  165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1107 17:09:00.039059  165743 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (7.451733367s)
	I1107 17:09:00.039096  165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I1107 17:09:00.039139  165743 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1107 17:09:00.039203  165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I1107 17:09:01.824108  165743 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.784848281s)
	I1107 17:09:01.824150  165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I1107 17:09:01.824181  165743 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1107 17:09:01.824223  165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1107 17:09:02.321028  165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1107 17:09:02.321067  165743 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
	I1107 17:09:02.321122  165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I1107 17:09:02.521066  165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I1107 17:09:02.521129  165743 cache_images.go:92] LoadImages completed in 13.012944956s
	W1107 17:09:02.521265  165743 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6: no such file or directory
	I1107 17:09:02.521313  165743 ssh_runner.go:195] Run: sudo crictl info
	I1107 17:09:02.549803  165743 cni.go:95] Creating CNI manager for ""
	I1107 17:09:02.549843  165743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1107 17:09:02.549862  165743 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 17:09:02.549885  165743 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-170735 NodeName:test-preload-170735 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 17:09:02.550126  165743 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-170735"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 17:09:02.550287  165743 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-170735 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.6 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 17:09:02.550387  165743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
	I1107 17:09:02.558461  165743 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 17:09:02.558534  165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 17:09:02.609209  165743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
	I1107 17:09:02.622855  165743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 17:09:02.636362  165743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I1107 17:09:02.650109  165743 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1107 17:09:02.653949  165743 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735 for IP: 192.168.67.2
	I1107 17:09:02.654100  165743 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-44720/.minikube/ca.key
	I1107 17:09:02.654166  165743 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.key
	I1107 17:09:02.654255  165743 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/client.key
	I1107 17:09:02.654354  165743 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/apiserver.key.c7fa3a9e
	I1107 17:09:02.654418  165743 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/proxy-client.key
	I1107 17:09:02.654554  165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176.pem (1338 bytes)
	W1107 17:09:02.654595  165743 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176_empty.pem, impossibly tiny 0 bytes
	I1107 17:09:02.654613  165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 17:09:02.654657  165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem (1082 bytes)
	I1107 17:09:02.654702  165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem (1123 bytes)
	I1107 17:09:02.654738  165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem (1679 bytes)
	I1107 17:09:02.654791  165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem (1708 bytes)
	I1107 17:09:02.655574  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 17:09:02.703678  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 17:09:02.723409  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 17:09:02.742737  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 17:09:02.763001  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 17:09:02.818366  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 17:09:02.839767  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 17:09:02.861717  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 17:09:02.910886  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176.pem --> /usr/share/ca-certificates/51176.pem (1338 bytes)
	I1107 17:09:02.931102  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem --> /usr/share/ca-certificates/511762.pem (1708 bytes)
	I1107 17:09:02.951804  165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 17:09:03.011717  165743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 17:09:03.027317  165743 ssh_runner.go:195] Run: openssl version
	I1107 17:09:03.032867  165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 17:09:03.041130  165743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:09:03.044672  165743 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:09:03.044721  165743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:09:03.050588  165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 17:09:03.105632  165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51176.pem && ln -fs /usr/share/ca-certificates/51176.pem /etc/ssl/certs/51176.pem"
	I1107 17:09:03.114215  165743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51176.pem
	I1107 17:09:03.117586  165743 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/51176.pem
	I1107 17:09:03.117644  165743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51176.pem
	I1107 17:09:03.123353  165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/51176.pem /etc/ssl/certs/51391683.0"
	I1107 17:09:03.131017  165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/511762.pem && ln -fs /usr/share/ca-certificates/511762.pem /etc/ssl/certs/511762.pem"
	I1107 17:09:03.139872  165743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/511762.pem
	I1107 17:09:03.143694  165743 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/511762.pem
	I1107 17:09:03.143738  165743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/511762.pem
	I1107 17:09:03.149761  165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/511762.pem /etc/ssl/certs/3ec20f2e.0"
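The three `ln -fs` runs above follow OpenSSL's subject-hash lookup convention: a CA is discoverable only if a symlink named `<subject-hash>.0` in the certs directory points at its PEM file, where the hash comes from `openssl x509 -hash -noout`. A minimal self-contained sketch of that convention, assuming `openssl` is on PATH and using a throwaway CA and temp directory (all paths here are illustrative, not minikube's):

```shell
#!/bin/sh
set -eu
# Illustrative sketch of the hash-symlink pattern in the log above.
# Uses a temp dir instead of /etc/ssl/certs and a freshly generated CA
# so the example is self-contained.
certs_dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=exampleCA" \
  -keyout "$certs_dir/ca.key" -out "$certs_dir/exampleCA.pem" 2>/dev/null
# OpenSSL locates trusted CAs by a hash of the subject name: <hash>.0
hash=$(openssl x509 -hash -noout -in "$certs_dir/exampleCA.pem")
ln -fs "$certs_dir/exampleCA.pem" "$certs_dir/$hash.0"
# Verify the link exists and resolves back to the certificate.
test -L "$certs_dir/$hash.0" && echo "linked as $hash.0"
```

The `test -L ... || ln -fs ...` form in the log is the idempotent variant of the same step: it only creates the link when one is not already in place.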
	I1107 17:09:03.209904  165743 kubeadm.go:396] StartCluster: {Name:test-preload-170735 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] D
NSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:09:03.210035  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1107 17:09:03.210092  165743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 17:09:03.240135  165743 cri.go:87] found id: "bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206"
	I1107 17:09:03.240172  165743 cri.go:87] found id: "0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834"
	I1107 17:09:03.240181  165743 cri.go:87] found id: ""
	I1107 17:09:03.240225  165743 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1107 17:09:03.327373  165743 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5","pid":1641,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5/rootfs","created":"2022-11-07T17:07:57.155832841Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","io.kubernetes.cri.sandbox-name":"etcd-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa","pid":3510,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa/rootfs","created":"2022-11-07T17:08:53.110308717Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-lv445_fcbfbd08-498e-4a9c-8d36-0d45cbd312bd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-lv445","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834","pid":3658,"status":"running","bundle":"/run/containerd/
io.containerd.runtime.v2.task/k8s.io/0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834/rootfs","created":"2022-11-07T17:08:54.456156833Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","pid":2180,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/250fd
604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4/rootfs","created":"2022-11-07T17:08:16.602156421Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-fh9w9_eca84e65-57b5-4cc9-b42a-0f991c91ffe7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-fh9w9","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d","pid":3521,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d4d536c9a0
a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d/rootfs","created":"2022-11-07T17:08:53.110915142Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-170735_11f8c11ccd07f3d1eb49f811a3342256","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","rootfs":"/run/containerd/io.containerd.runtime.v2.ta
sk/k8s.io/37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95/rootfs","created":"2022-11-07T17:07:56.942370634Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-170735_11f8c11ccd07f3d1eb49f811a3342256","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049","pid":3522,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049","rootfs":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049/rootfs","created":"2022-11-07T17:08:53.027578577Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-170735_809d9df5626cf37e910052830f1a68d3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623","pid":2181,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba6
8a623","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623/rootfs","created":"2022-11-07T17:08:16.461925695Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-lv445_fcbfbd08-498e-4a9c-8d36-0d45cbd312bd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-lv445","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067","pid":2431,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067","rootf
s":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067/rootfs","created":"2022-11-07T17:08:19.802116354Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","io.kubernetes.cri.sandbox-name":"kindnet-fh9w9","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114","pid":2624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114/rootfs","created":"2022-11-07T17:08:24.414118976Z","annotati
ons":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e","pid":3576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e/rootfs","created":"2022-11-07T17:08:53.22282877Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-
shares":"102","io.kubernetes.cri.sandbox-id":"5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-46n4z_0bb47afc-9c44-48b3-8dd4-966ed2608a7a","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-46n4z","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83","pid":3544,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83/rootfs","created":"2022-11-07T17:08:53.114873995Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri
.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-170735_62ea0ae7f0dd287c41e3fc4d83f43bcc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","pid":1509,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86/rootfs","created":"2022-11-07T17:07:56.942483078Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cr
i.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-170735_62ea0ae7f0dd287c41e3fc4d83f43bcc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","pid":1511,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da/rootfs","created":"2022-11-07T17:07:56.942394808Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.c
ri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-170735_809d9df5626cf37e910052830f1a68d3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","pid":2564,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90/rootfs","created":"2022-11-07T17:08:24.30208689Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.s
andbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c43d0d64-f743-4627-894e-be6b8af2e64d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8","pid":2247,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8/rootfs","created":"2022-11-07T17:08:16.619320417Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-
name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623","io.kubernetes.cri.sandbox-name":"kube-proxy-lv445","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a","pid":1639,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a/rootfs","created":"2022-11-07T17:07:57.155960118Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","io.kubernetes.cri.sandbox-name":"kube-apiserver-tes
t-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","pid":2565,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593/rootfs","created":"2022-11-07T17:08:24.301147925Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-46n4z_0bb47afc-9c44-48b3-8dd4-966ed2608a7a","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-na
me":"coredns-6d4b75cb6d-46n4z","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","pid":1510,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74/rootfs","created":"2022-11-07T17:07:56.942447268Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-170735_d3532015a9097ea10a4280936fe474ca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri
.sandbox-name":"kube-scheduler-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247","pid":2623,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247/rootfs","created":"2022-11-07T17:08:24.411783378Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-46n4z","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d596e727cf71ed6c642b598c327f52552f
ba8f973625380adcf054e3f5d2d1c6","pid":1642,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6/rootfs","created":"2022-11-07T17:07:57.156067666Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a","pid":3553,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddefa3ac5399737dff
7110fcaf80962425381646c55d72cc70f71a263df0342a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a/rootfs","created":"2022-11-07T17:08:53.113518089Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-170735_d3532015a9097ea10a4280936fe474ca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6","pid":1640,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.
io/ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6/rootfs","created":"2022-11-07T17:07:57.156161632Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","pid":3562,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f82c54e5c1fb4c8247a99e96a8cf288d1c50b2
7e3b90db040e3d9988132681f6/rootfs","created":"2022-11-07T17:08:53.114973557Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c43d0d64-f743-4627-894e-be6b8af2e64d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed","pid":3518,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fb
de5a10025abb05664ed/rootfs","created":"2022-11-07T17:08:53.111272121Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-fh9w9_eca84e65-57b5-4cc9-b42a-0f991c91ffe7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-fh9w9","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
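The lines that follow show minikube (in `cri.go`, written in Go) walking this `runc list -f json` array and keeping only containers whose status matches the wanted state from `{State:paused ...}` — everything still `running` is skipped. A minimal shell sketch of that selection logic, using inline sample data in place of the real `sudo runc --root ... list -f json` output and a `python3` one-liner (assumed available) for the JSON parsing:

```shell
#!/bin/sh
set -eu
# Sketch of the filter applied below: from a runc JSON listing, keep only
# the IDs of containers in the wanted state. Sample data stands in for
# `sudo runc --root /run/containerd/runc/k8s.io list -f json`.
json='[{"id":"aaa","status":"running"},{"id":"bbb","status":"paused"}]'
want="paused"
printf '%s' "$json" | python3 -c '
import json, sys
want = sys.argv[1]
for c in json.load(sys.stdin):
    if c["status"] == want:
        print(c["id"])
' "$want"
```

With `want="paused"` this prints only `bbb`, mirroring why every `running` container in this log is logged as skipped.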
	I1107 17:09:03.327859  165743 cri.go:124] list returned 25 containers
	I1107 17:09:03.327880  165743 cri.go:127] container: {ID:0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5 Status:running}
	I1107 17:09:03.327898  165743 cri.go:129] skipping 0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5 - not in ps
	I1107 17:09:03.327906  165743 cri.go:127] container: {ID:0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa Status:running}
	I1107 17:09:03.327915  165743 cri.go:129] skipping 0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa - not in ps
	I1107 17:09:03.327927  165743 cri.go:127] container: {ID:0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834 Status:running}
	I1107 17:09:03.327939  165743 cri.go:133] skipping {0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834 running}: state = "running", want "paused"
	I1107 17:09:03.327954  165743 cri.go:127] container: {ID:250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4 Status:running}
	I1107 17:09:03.327966  165743 cri.go:129] skipping 250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4 - not in ps
	I1107 17:09:03.327973  165743 cri.go:127] container: {ID:2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d Status:running}
	I1107 17:09:03.327986  165743 cri.go:129] skipping 2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d - not in ps
	I1107 17:09:03.328004  165743 cri.go:127] container: {ID:37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95 Status:running}
	I1107 17:09:03.328018  165743 cri.go:129] skipping 37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95 - not in ps
	I1107 17:09:03.328029  165743 cri.go:127] container: {ID:3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049 Status:running}
	I1107 17:09:03.328041  165743 cri.go:129] skipping 3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049 - not in ps
	I1107 17:09:03.328047  165743 cri.go:127] container: {ID:415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623 Status:running}
	I1107 17:09:03.328060  165743 cri.go:129] skipping 415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623 - not in ps
	I1107 17:09:03.328071  165743 cri.go:127] container: {ID:46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067 Status:running}
	I1107 17:09:03.328082  165743 cri.go:129] skipping 46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067 - not in ps
	I1107 17:09:03.328092  165743 cri.go:127] container: {ID:5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114 Status:running}
	I1107 17:09:03.328100  165743 cri.go:129] skipping 5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114 - not in ps
	I1107 17:09:03.328107  165743 cri.go:127] container: {ID:5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e Status:running}
	I1107 17:09:03.328121  165743 cri.go:129] skipping 5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e - not in ps
	I1107 17:09:03.328132  165743 cri.go:127] container: {ID:5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83 Status:running}
	I1107 17:09:03.328144  165743 cri.go:129] skipping 5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83 - not in ps
	I1107 17:09:03.328150  165743 cri.go:127] container: {ID:705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86 Status:running}
	I1107 17:09:03.328169  165743 cri.go:129] skipping 705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86 - not in ps
	I1107 17:09:03.328181  165743 cri.go:127] container: {ID:76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da Status:running}
	I1107 17:09:03.328188  165743 cri.go:129] skipping 76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da - not in ps
	I1107 17:09:03.328199  165743 cri.go:127] container: {ID:7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90 Status:running}
	I1107 17:09:03.328209  165743 cri.go:129] skipping 7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90 - not in ps
	I1107 17:09:03.328214  165743 cri.go:127] container: {ID:7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8 Status:running}
	I1107 17:09:03.328223  165743 cri.go:129] skipping 7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8 - not in ps
	I1107 17:09:03.328229  165743 cri.go:127] container: {ID:9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a Status:running}
	I1107 17:09:03.328241  165743 cri.go:129] skipping 9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a - not in ps
	I1107 17:09:03.328248  165743 cri.go:127] container: {ID:a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593 Status:running}
	I1107 17:09:03.328263  165743 cri.go:129] skipping a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593 - not in ps
	I1107 17:09:03.328275  165743 cri.go:127] container: {ID:a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74 Status:running}
	I1107 17:09:03.328287  165743 cri.go:129] skipping a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74 - not in ps
	I1107 17:09:03.328297  165743 cri.go:127] container: {ID:b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247 Status:running}
	I1107 17:09:03.328308  165743 cri.go:129] skipping b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247 - not in ps
	I1107 17:09:03.328318  165743 cri.go:127] container: {ID:d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6 Status:running}
	I1107 17:09:03.328326  165743 cri.go:129] skipping d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6 - not in ps
	I1107 17:09:03.328337  165743 cri.go:127] container: {ID:ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a Status:running}
	I1107 17:09:03.328349  165743 cri.go:129] skipping ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a - not in ps
	I1107 17:09:03.328358  165743 cri.go:127] container: {ID:ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6 Status:running}
	I1107 17:09:03.328370  165743 cri.go:129] skipping ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6 - not in ps
	I1107 17:09:03.328381  165743 cri.go:127] container: {ID:f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6 Status:running}
	I1107 17:09:03.328391  165743 cri.go:129] skipping f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6 - not in ps
	I1107 17:09:03.328404  165743 cri.go:127] container: {ID:f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed Status:running}
	I1107 17:09:03.328415  165743 cri.go:129] skipping f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed - not in ps
	I1107 17:09:03.328459  165743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 17:09:03.336550  165743 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1107 17:09:03.336573  165743 kubeadm.go:627] restartCluster start
	I1107 17:09:03.336628  165743 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 17:09:03.344380  165743 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:09:03.345034  165743 kubeconfig.go:92] found "test-preload-170735" server: "https://192.168.67.2:8443"
	I1107 17:09:03.345729  165743 kapi.go:59] client config for test-preload-170735: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/client.key", CAFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 17:09:03.346403  165743 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 17:09:03.402000  165743 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-11-07 17:07:52.875254223 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-11-07 17:09:02.646277681 +0000
	@@ -38,7 +38,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.24.4
	+kubernetesVersion: v1.24.6
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1107 17:09:03.402024  165743 kubeadm.go:1114] stopping kube-system containers ...
	I1107 17:09:03.402039  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1107 17:09:03.402098  165743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 17:09:03.431844  165743 cri.go:87] found id: "bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206"
	I1107 17:09:03.431899  165743 cri.go:87] found id: "0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834"
	I1107 17:09:03.431910  165743 cri.go:87] found id: ""
	I1107 17:09:03.431917  165743 cri.go:232] Stopping containers: [bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206 0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834]
	I1107 17:09:03.431974  165743 ssh_runner.go:195] Run: which crictl
	I1107 17:09:03.436330  165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206 0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834
	I1107 17:09:03.742156  165743 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 17:09:03.809643  165743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:09:03.817012  165743 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov  7 17:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Nov  7 17:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Nov  7 17:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov  7 17:07 /etc/kubernetes/scheduler.conf
	
	I1107 17:09:03.817084  165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1107 17:09:03.823720  165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1107 17:09:03.830244  165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1107 17:09:03.836663  165743 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:09:03.836710  165743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1107 17:09:03.842795  165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1107 17:09:03.849520  165743 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:09:03.849574  165743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1107 17:09:03.856003  165743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 17:09:03.862911  165743 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 17:09:03.862935  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:09:04.002289  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:09:05.237323  165743 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.234999973s)
	I1107 17:09:05.237359  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:09:05.449035  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:09:05.504177  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:09:05.621639  165743 api_server.go:51] waiting for apiserver process to appear ...
	I1107 17:09:05.621702  165743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:09:05.633566  165743 api_server.go:71] duration metric: took 11.935157ms to wait for apiserver process to appear ...
	I1107 17:09:05.633600  165743 api_server.go:87] waiting for apiserver healthz status ...
	I1107 17:09:05.633614  165743 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1107 17:09:05.639393  165743 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1107 17:09:05.645496  165743 api_server.go:140] control plane version: v1.24.4
	W1107 17:09:05.645524  165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1107 17:09:06.147196  165743 api_server.go:140] control plane version: v1.24.4
	W1107 17:09:06.147277  165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1107 17:09:06.646924  165743 api_server.go:140] control plane version: v1.24.4
	W1107 17:09:06.646957  165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1107 17:09:07.147645  165743 api_server.go:140] control plane version: v1.24.4
	W1107 17:09:07.147679  165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1107 17:09:07.647341  165743 api_server.go:140] control plane version: v1.24.4
	W1107 17:09:07.647372  165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	W1107 17:09:08.146168  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:09:08.646046  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:09:09.147144  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:09:09.646092  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:09:10.147021  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:09:10.646973  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:09:11.146883  165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	I1107 17:09:13.915841  165743 api_server.go:140] control plane version: v1.24.6
	I1107 17:09:13.915921  165743 api_server.go:130] duration metric: took 8.282312967s to wait for apiserver health ...
	I1107 17:09:13.915945  165743 cni.go:95] Creating CNI manager for ""
	I1107 17:09:13.915963  165743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1107 17:09:13.918212  165743 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 17:09:13.919726  165743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 17:09:13.924616  165743 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
	I1107 17:09:13.924640  165743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1107 17:09:14.021282  165743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 17:09:15.124609  165743 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.103271829s)
	I1107 17:09:15.124658  165743 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 17:09:15.134287  165743 system_pods.go:59] 8 kube-system pods found
	I1107 17:09:15.134343  165743 system_pods.go:61] "coredns-6d4b75cb6d-46n4z" [0bb47afc-9c44-48b3-8dd4-966ed2608a7a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 17:09:15.134355  165743 system_pods.go:61] "etcd-test-preload-170735" [bf983595-48b0-4ad3-948e-264fe4654767] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 17:09:15.134365  165743 system_pods.go:61] "kindnet-fh9w9" [eca84e65-57b5-4cc9-b42a-0f991c91ffe7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1107 17:09:15.134375  165743 system_pods.go:61] "kube-apiserver-test-preload-170735" [6005f40b-0034-46af-ac9b-8b7945ea8996] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 17:09:15.134382  165743 system_pods.go:61] "kube-controller-manager-test-preload-170735" [05e955ad-7fc3-4874-97a5-7ba8ee0faf37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 17:09:15.134396  165743 system_pods.go:61] "kube-proxy-lv445" [fcbfbd08-498e-4a9c-8d36-0d45cbd312bd] Running
	I1107 17:09:15.134404  165743 system_pods.go:61] "kube-scheduler-test-preload-170735" [102796b5-9e64-4c55-9ceb-c091fb0faf8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1107 17:09:15.134416  165743 system_pods.go:61] "storage-provisioner" [c43d0d64-f743-4627-894e-be6b8af2e64d] Running
	I1107 17:09:15.134425  165743 system_pods.go:74] duration metric: took 9.760603ms to wait for pod list to return data ...
	I1107 17:09:15.134434  165743 node_conditions.go:102] verifying NodePressure condition ...
	I1107 17:09:15.136728  165743 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 17:09:15.136759  165743 node_conditions.go:123] node cpu capacity is 8
	I1107 17:09:15.136770  165743 node_conditions.go:105] duration metric: took 2.331494ms to run NodePressure ...
	I1107 17:09:15.136786  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:09:15.388874  165743 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1107 17:09:15.392441  165743 kubeadm.go:778] kubelet initialised
	I1107 17:09:15.392464  165743 kubeadm.go:779] duration metric: took 3.557352ms waiting for restarted kubelet to initialise ...
	I1107 17:09:15.392473  165743 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:09:15.396706  165743 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace to be "Ready" ...
	I1107 17:09:17.406088  165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:19.407719  165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:21.906077  165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:23.906170  165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:25.906482  165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:28.406244  165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:29.906673  165743 pod_ready.go:92] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"True"
	I1107 17:09:29.906708  165743 pod_ready.go:81] duration metric: took 14.509975616s waiting for pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace to be "Ready" ...
	I1107 17:09:29.906722  165743 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-170735" in "kube-system" namespace to be "Ready" ...
	I1107 17:09:31.916347  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:33.916395  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:35.917695  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:38.416611  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:40.417341  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:42.917030  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:44.917463  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:47.417821  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:49.916882  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:52.417257  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:54.916575  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:56.916604  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:09:58.917108  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:01.417633  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:03.917219  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:06.416808  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:08.917079  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:11.417333  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:13.417408  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:15.917166  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:18.415994  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:20.416647  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:22.917094  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:24.919800  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:27.416902  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:29.417714  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:31.917189  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:34.417311  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:36.916350  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:38.917416  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:41.416812  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:43.417080  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:45.916487  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:47.917346  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:50.416654  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:52.917124  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:55.416999  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:57.417311  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:10:59.916704  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:01.919070  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:04.416758  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:06.416952  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:08.916903  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:11.416562  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:13.417202  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:15.917270  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:18.416813  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:20.917286  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:23.416732  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:25.417405  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:27.916529  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:29.916950  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:32.417231  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:34.916940  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:37.416873  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:39.417294  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:41.916140  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:43.916375  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:45.916655  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:47.916977  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:50.416682  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:52.417097  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:54.916635  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:57.416816  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:11:59.916263  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:01.916974  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:03.917239  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:06.416793  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:08.417072  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:10.916349  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:13.416821  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:15.916263  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:17.916820  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:19.917768  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:22.416608  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:24.417657  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:26.916718  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:28.916894  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:31.417519  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:33.418814  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:35.916938  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:38.416980  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:40.916839  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:42.917145  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:44.917492  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:47.417047  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:49.916565  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:51.916916  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:54.416695  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:56.419030  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:12:58.916323  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:00.917565  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:03.416572  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:05.416612  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:07.917363  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:10.416406  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:12.416604  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:14.916267  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:16.916810  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:19.417492  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:21.916818  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:23.917104  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:26.416941  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:28.916283  165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
	I1107 17:13:29.912039  165743 pod_ready.go:81] duration metric: took 4m0.005300509s waiting for pod "etcd-test-preload-170735" in "kube-system" namespace to be "Ready" ...
	E1107 17:13:29.912067  165743 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-170735" in "kube-system" namespace to be "Ready" (will not retry!)
	I1107 17:13:29.912099  165743 pod_ready.go:38] duration metric: took 4m14.519613554s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:13:29.912140  165743 kubeadm.go:631] restartCluster took 4m26.575555046s
	W1107 17:13:29.912302  165743 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1107 17:13:29.912357  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1107 17:13:31.585704  165743 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.673321164s)
	I1107 17:13:31.585763  165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:13:31.595197  165743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 17:13:31.601977  165743 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 17:13:31.602022  165743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:13:31.608611  165743 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 17:13:31.608656  165743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 17:13:31.641698  165743 kubeadm.go:317] W1107 17:13:31.640965    6500 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1107 17:13:31.673782  165743 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1107 17:13:31.734442  165743 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 17:13:31.734566  165743 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I1107 17:13:31.734625  165743 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I1107 17:13:31.734689  165743 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I1107 17:13:31.734827  165743 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1107 17:13:31.734917  165743 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1107 17:13:31.736598  165743 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I1107 17:13:31.736666  165743 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 17:13:31.736791  165743 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1107 17:13:31.736841  165743 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1107 17:13:31.736892  165743 kubeadm.go:317] OS: Linux
	I1107 17:13:31.736952  165743 kubeadm.go:317] CGROUPS_CPU: enabled
	I1107 17:13:31.737020  165743 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1107 17:13:31.737089  165743 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1107 17:13:31.737161  165743 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1107 17:13:31.737230  165743 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1107 17:13:31.737297  165743 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1107 17:13:31.737366  165743 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1107 17:13:31.737432  165743 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1107 17:13:31.737511  165743 kubeadm.go:317] CGROUPS_BLKIO: enabled
	W1107 17:13:31.737713  165743 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1107 17:13:31.640965    6500 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	I1107 17:13:31.737760  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1107 17:13:32.054639  165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:13:32.063813  165743 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 17:13:32.063875  165743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:13:32.070411  165743 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 17:13:32.070456  165743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 17:13:32.107519  165743 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I1107 17:13:32.107565  165743 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 17:13:32.134497  165743 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1107 17:13:32.134580  165743 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1107 17:13:32.134633  165743 kubeadm.go:317] OS: Linux
	I1107 17:13:32.134687  165743 kubeadm.go:317] CGROUPS_CPU: enabled
	I1107 17:13:32.134791  165743 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1107 17:13:32.134877  165743 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1107 17:13:32.134944  165743 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1107 17:13:32.135016  165743 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1107 17:13:32.135087  165743 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1107 17:13:32.135156  165743 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1107 17:13:32.135221  165743 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1107 17:13:32.135314  165743 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1107 17:13:32.196691  165743 kubeadm.go:317] W1107 17:13:32.102889    6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1107 17:13:32.196897  165743 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1107 17:13:32.197035  165743 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 17:13:32.197117  165743 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I1107 17:13:32.197155  165743 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I1107 17:13:32.197197  165743 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I1107 17:13:32.197292  165743 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1107 17:13:32.197352  165743 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1107 17:13:32.197439  165743 kubeadm.go:398] StartCluster complete in 4m28.987546075s
	I1107 17:13:32.197484  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:13:32.197525  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:13:32.220007  165743 cri.go:87] found id: ""
	I1107 17:13:32.220032  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.220040  165743 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:13:32.220053  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:13:32.220102  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:13:32.242014  165743 cri.go:87] found id: ""
	I1107 17:13:32.242043  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.242053  165743 logs.go:276] No container was found matching "etcd"
	I1107 17:13:32.242066  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:13:32.242112  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:13:32.262942  165743 cri.go:87] found id: ""
	I1107 17:13:32.262979  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.262988  165743 logs.go:276] No container was found matching "coredns"
	I1107 17:13:32.262995  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:13:32.263034  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:13:32.284464  165743 cri.go:87] found id: ""
	I1107 17:13:32.284488  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.284494  165743 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:13:32.284501  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:13:32.284552  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:13:32.307214  165743 cri.go:87] found id: ""
	I1107 17:13:32.307243  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.307252  165743 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:13:32.307260  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:13:32.307310  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:13:32.329151  165743 cri.go:87] found id: ""
	I1107 17:13:32.329180  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.329196  165743 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:13:32.329205  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:13:32.329257  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:13:32.350599  165743 cri.go:87] found id: ""
	I1107 17:13:32.350623  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.350629  165743 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:13:32.350635  165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:13:32.350673  165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:13:32.372494  165743 cri.go:87] found id: ""
	I1107 17:13:32.372522  165743 logs.go:274] 0 containers: []
	W1107 17:13:32.372532  165743 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:13:32.372545  165743 logs.go:123] Gathering logs for kubelet ...
	I1107 17:13:32.372558  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:13:32.435840  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926231    4309 projected.go:192] Error preparing data for projected volume kube-api-access-l9w87 for pod kube-system/kindnet-fh9w9: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.436259  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926837    4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87 podName:eca84e65-57b5-4cc9-b42a-0f991c91ffe7 nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.926808887 +0000 UTC m=+10.477882581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l9w87" (UniqueName: "kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87") pod "kindnet-fh9w9" (UID: "eca84e65-57b5-4cc9-b42a-0f991c91ffe7") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.436411  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927004    4309 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.436578  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927081    4309 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.436766  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927198    4309 projected.go:192] Error preparing data for projected volume kube-api-access-7jl9q for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.437177  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927299    4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c43d0d64-f743-4627-894e-be6b8af2e64d-kube-api-access-7jl9q podName:c43d0d64-f743-4627-894e-be6b8af2e64d nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.927284243 +0000 UTC m=+10.478357937 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7jl9q" (UniqueName: "kubernetes.io/projected/c43d0d64-f743-4627-894e-be6b8af2e64d-kube-api-access-7jl9q") pod "storage-provisioner" (UID: "c43d0d64-f743-4627-894e-be6b8af2e64d") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.437330  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927404    4309 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.437497  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927466    4309 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.437684  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927560    4309 projected.go:192] Error preparing data for projected volume kube-api-access-6vv4c for pod kube-system/kube-proxy-lv445: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.438089  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927649    4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcbfbd08-498e-4a9c-8d36-0d45cbd312bd-kube-api-access-6vv4c podName:fcbfbd08-498e-4a9c-8d36-0d45cbd312bd nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.927635728 +0000 UTC m=+10.478709423 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6vv4c" (UniqueName: "kubernetes.io/projected/fcbfbd08-498e-4a9c-8d36-0d45cbd312bd-kube-api-access-6vv4c") pod "kube-proxy-lv445" (UID: "fcbfbd08-498e-4a9c-8d36-0d45cbd312bd") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.438269  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927751    4309 projected.go:192] Error preparing data for projected volume kube-api-access-qmxlx for pod kube-system/coredns-6d4b75cb6d-46n4z: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.438700  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927842    4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bb47afc-9c44-48b3-8dd4-966ed2608a7a-kube-api-access-qmxlx podName:0bb47afc-9c44-48b3-8dd4-966ed2608a7a nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.927829872 +0000 UTC m=+10.478903566 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qmxlx" (UniqueName: "kubernetes.io/projected/0bb47afc-9c44-48b3-8dd4-966ed2608a7a-kube-api-access-qmxlx") pod "coredns-6d4b75cb6d-46n4z" (UID: "0bb47afc-9c44-48b3-8dd4-966ed2608a7a") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.438846  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927954    4309 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	W1107 17:13:32.439007  165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.928028    4309 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	I1107 17:13:32.459618  165743 logs.go:123] Gathering logs for dmesg ...
	I1107 17:13:32.459642  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:13:32.475496  165743 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:13:32.475522  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:13:32.524048  165743 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:13:32.524077  165743 logs.go:123] Gathering logs for containerd ...
	I1107 17:13:32.524091  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:13:32.579264  165743 logs.go:123] Gathering logs for container status ...
	I1107 17:13:32.579299  165743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1107 17:13:32.605796  165743 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1107 17:13:32.102889    6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W1107 17:13:32.605835  165743 out.go:239] * 
	W1107 17:13:32.605973  165743 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1107 17:13:32.102889    6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 17:13:32.606006  165743 out.go:239] * 
	W1107 17:13:32.606836  165743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 17:13:32.608746  165743 out.go:177] X Problems detected in kubelet:
	I1107 17:13:32.610170  165743 out.go:177]   Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926231    4309 projected.go:192] Error preparing data for projected volume kube-api-access-l9w87 for pod kube-system/kindnet-fh9w9: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	I1107 17:13:32.612470  165743 out.go:177]   Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926837    4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87 podName:eca84e65-57b5-4cc9-b42a-0f991c91ffe7 nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.926808887 +0000 UTC m=+10.477882581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l9w87" (UniqueName: "kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87") pod "kindnet-fh9w9" (UID: "eca84e65-57b5-4cc9-b42a-0f991c91ffe7") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	I1107 17:13:32.614018  165743 out.go:177]   Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927004    4309 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
	I1107 17:13:32.616027  165743 out.go:177] 
	W1107 17:13:32.617358  165743 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1107 17:13:32.102889    6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 17:13:32.617464  165743 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1107 17:13:32.617526  165743 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1107 17:13:32.619660  165743 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2022-11-07 17:07:38 UTC, end at Mon 2022-11-07 17:13:33 UTC. --
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.864870660Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.879687889Z" level=info msg="StopPodSandbox for \"this\""
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.879728872Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.894595969Z" level=info msg="StopPodSandbox for \"endpoint\""
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.894640594Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.909779827Z" level=info msg="StopPodSandbox for \"is\""
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.909819766Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.925069979Z" level=info msg="StopPodSandbox for \"deprecated,\""
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.925123093Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.940916581Z" level=info msg="StopPodSandbox for \"please\""
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.940969746Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.956375043Z" level=info msg="StopPodSandbox for \"consider\""
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.956425277Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.971528771Z" level=info msg="StopPodSandbox for \"using\""
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.971574795Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.987107574Z" level=info msg="StopPodSandbox for \"full\""
	Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.987161603Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
	Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.002503858Z" level=info msg="StopPodSandbox for \"URL\""
	Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.002563853Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
	Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.017614591Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.017655062Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.033595722Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.033644064Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.049862204Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.049903989Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.007365] FS-Cache: O-key=[8] '1ca20f0200000000'
	[  +0.004971] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.007955] FS-Cache: N-cookie d=00000000e1ebe1e0{9p.inode} n=00000000b53001db
	[  +0.008740] FS-Cache: N-key=[8] '1ca20f0200000000'
	[  +0.435035] FS-Cache: Duplicate cookie detected
	[  +0.004685] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006792] FS-Cache: O-cookie d=00000000e1ebe1e0{9p.inode} n=0000000049910c82
	[  +0.007358] FS-Cache: O-key=[8] '21a20f0200000000'
	[  +0.004958] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006600] FS-Cache: N-cookie d=00000000e1ebe1e0{9p.inode} n=00000000b4cbcea0
	[  +0.008738] FS-Cache: N-key=[8] '21a20f0200000000'
	[Nov 7 16:53] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 7 17:05] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-73d930ae71b0
	[  +0.000012] ll header: 00000000: 02 42 e2 67 c1 53 02 42 c0 a8 3a 02 08 00
	[  +1.024597] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-73d930ae71b0
	[  +0.000006] ll header: 00000000: 02 42 e2 67 c1 53 02 42 c0 a8 3a 02 08 00
	[  +2.011803] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-73d930ae71b0
	[  +0.000030] ll header: 00000000: 02 42 e2 67 c1 53 02 42 c0 a8 3a 02 08 00
	[  +4.223544] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-73d930ae71b0
	[  +0.000031] ll header: 00000000: 02 42 e2 67 c1 53 02 42 c0 a8 3a 02 08 00
	[Nov 7 17:06] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-73d930ae71b0
	[  +0.000031] ll header: 00000000: 02 42 e2 67 c1 53 02 42 c0 a8 3a 02 08 00
	[Nov 7 17:09] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000789] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.014707] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> kernel <==
	*  17:13:33 up  2:56,  0 users,  load average: 0.24, 0.59, 0.89
	Linux test-preload-170735 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 17:07:38 UTC, end at Mon 2022-11-07 17:13:33 UTC. --
	Nov 07 17:11:55 test-preload-170735 kubelet[4309]: I1107 17:11:55.154694    4309 scope.go:110] "RemoveContainer" containerID="219f5216a4e8bc821bf33efb21542714b74cdc65a8ad7bc02582f4633cbd6da9"
	Nov 07 17:11:55 test-preload-170735 kubelet[4309]: I1107 17:11:55.155028    4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
	Nov 07 17:11:55 test-preload-170735 kubelet[4309]: E1107 17:11:55.155418    4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
	Nov 07 17:11:56 test-preload-170735 kubelet[4309]: I1107 17:11:56.538873    4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
	Nov 07 17:11:56 test-preload-170735 kubelet[4309]: E1107 17:11:56.539298    4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
	Nov 07 17:11:57 test-preload-170735 kubelet[4309]: I1107 17:11:57.160726    4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
	Nov 07 17:11:57 test-preload-170735 kubelet[4309]: E1107 17:11:57.161056    4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
	Nov 07 17:11:58 test-preload-170735 kubelet[4309]: I1107 17:11:58.162285    4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
	Nov 07 17:11:58 test-preload-170735 kubelet[4309]: E1107 17:11:58.162639    4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
	Nov 07 17:12:11 test-preload-170735 kubelet[4309]: I1107 17:12:11.703510    4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
	Nov 07 17:12:11 test-preload-170735 kubelet[4309]: E1107 17:12:11.703871    4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
	Nov 07 17:12:25 test-preload-170735 kubelet[4309]: I1107 17:12:25.704094    4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
	Nov 07 17:12:25 test-preload-170735 kubelet[4309]: E1107 17:12:25.704442    4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
	Nov 07 17:12:40 test-preload-170735 kubelet[4309]: I1107 17:12:40.703609    4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
	Nov 07 17:12:40 test-preload-170735 kubelet[4309]: E1107 17:12:40.703993    4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
	Nov 07 17:12:54 test-preload-170735 kubelet[4309]: I1107 17:12:54.703818    4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
	Nov 07 17:12:54 test-preload-170735 kubelet[4309]: E1107 17:12:54.704169    4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
	Nov 07 17:13:07 test-preload-170735 kubelet[4309]: I1107 17:13:07.703611    4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
	Nov 07 17:13:07 test-preload-170735 kubelet[4309]: E1107 17:13:07.703938    4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
	Nov 07 17:13:22 test-preload-170735 kubelet[4309]: I1107 17:13:22.703867    4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
	Nov 07 17:13:22 test-preload-170735 kubelet[4309]: E1107 17:13:22.704422    4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
	Nov 07 17:13:30 test-preload-170735 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Nov 07 17:13:30 test-preload-170735 kubelet[4309]: I1107 17:13:30.025109    4309 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 07 17:13:30 test-preload-170735 systemd[1]: kubelet.service: Succeeded.
	Nov 07 17:13:30 test-preload-170735 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

-- /stdout --
** stderr ** 
	E1107 17:13:33.640482  170436 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-170735 -n test-preload-170735
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-170735 -n test-preload-170735: exit status 2 (339.575218ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "test-preload-170735" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-170735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-170735
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-170735: (1.996488777s)
--- FAIL: TestPreload (360.35s)
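The fatal preflight errors above (`[ERROR Port-2379]`, `[ERROR Port-2380]`) mean something was already bound to etcd's client and peer ports when kubeadm re-ran init. Below is a minimal sketch of the same port-in-use check, assuming `python3` is available on the node; `check_port` is a hypothetical helper for illustration, not part of minikube or kubeadm:

```shell
# Sketch: reproduce kubeadm's Port-in-use preflight check for etcd's ports.
# check_port prints "in use" if something accepts a TCP connection on
# localhost:<port>, else "free".
check_port() {
  python3 - "$1" <<'PY'
import socket, sys
s = socket.socket()
s.settimeout(1)
try:
    s.connect(("127.0.0.1", int(sys.argv[1])))
    print("in use")   # something accepted the connection
except OSError:
    print("free")     # connection refused or timed out
finally:
    s.close()
PY
}

# Start a throwaway listener so 2379 is demonstrably busy, as in the failure.
python3 -c 'import socket, time
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 2379))
s.listen(1)
time.sleep(5)' &
listener=$!
sleep 1

p2379=$(check_port 2379)   # etcd client port
p2380=$(check_port 2380)   # etcd peer port
echo "port 2379: ${p2379}"
echo "port 2380: ${p2380}"

kill "${listener}" 2>/dev/null
```

On a live node, something like `ss -ltnp 'sport = :2379'` (or the `lsof` invocation minikube suggests in its output) would additionally identify the owning process; in this test the likely culprit is the etcd container left over from the previous start.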

TestKubernetesUpgrade (577.93s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171701 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-171701 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.179911678s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-171701

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-171701: (4.789437445s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-171701 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-171701 status --format={{.Host}}: exit status 7 (123.906206ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171701 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1107 17:17:54.187038   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-171701 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (8m46.230760978s)

-- stdout --
	* [kubernetes-upgrade-171701] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-171701 in cluster kubernetes-upgrade-171701
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-171701" ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Nov 07 17:25:44 kubernetes-upgrade-171701 kubelet[12470]: E1107 17:25:44.431940   12470 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 07 17:25:45 kubernetes-upgrade-171701 kubelet[12480]: E1107 17:25:45.192163   12480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 07 17:25:45 kubernetes-upgrade-171701 kubelet[12492]: E1107 17:25:45.940674   12492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	
	

-- /stdout --
** stderr ** 
	I1107 17:17:48.634913  209319 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:17:48.635286  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:17:48.635302  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:17:48.635310  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:17:48.635590  209319 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
	I1107 17:17:48.636373  209319 out.go:303] Setting JSON to false
	I1107 17:17:48.638252  209319 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10822,"bootTime":1667830647,"procs":721,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 17:17:48.638351  209319 start.go:126] virtualization: kvm guest
	I1107 17:17:48.640502  209319 out.go:177] * [kubernetes-upgrade-171701] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 17:17:48.642120  209319 notify.go:220] Checking for updates...
	I1107 17:17:48.644351  209319 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:17:48.645967  209319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:17:48.647579  209319 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	I1107 17:17:48.649222  209319 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	I1107 17:17:48.650814  209319 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 17:17:48.652900  209319 config.go:180] Loaded profile config "kubernetes-upgrade-171701": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1107 17:17:48.653543  209319 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:17:48.693316  209319 docker.go:137] docker version: linux-20.10.21
	I1107 17:17:48.693433  209319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:17:48.841990  209319 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:88 SystemTime:2022-11-07 17:17:48.721061459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:17:48.842136  209319 docker.go:254] overlay module found
	I1107 17:17:48.845183  209319 out.go:177] * Using the docker driver based on existing profile
	I1107 17:17:48.846752  209319 start.go:282] selected driver: docker
	I1107 17:17:48.846789  209319 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-171701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-171701 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:17:48.846917  209319 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:17:48.848123  209319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:17:49.052576  209319 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:84 SystemTime:2022-11-07 17:17:48.899489272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:17:49.052862  209319 cni.go:95] Creating CNI manager for ""
	I1107 17:17:49.052891  209319 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1107 17:17:49.052914  209319 start_flags.go:317] config:
	{Name:kubernetes-upgrade-171701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-171701 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containe
rd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:17:49.055926  209319 out.go:177] * Starting control plane node kubernetes-upgrade-171701 in cluster kubernetes-upgrade-171701
	I1107 17:17:49.057383  209319 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1107 17:17:49.059018  209319 out.go:177] * Pulling base image ...
	I1107 17:17:49.060434  209319 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1107 17:17:49.060490  209319 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1107 17:17:49.060506  209319 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 17:17:49.060511  209319 cache.go:57] Caching tarball of preloaded images
	I1107 17:17:49.060745  209319 preload.go:174] Found /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 17:17:49.060767  209319 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I1107 17:17:49.060911  209319 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kubernetes-upgrade-171701/config.json ...
	I1107 17:17:49.090905  209319 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 17:17:49.090936  209319 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 17:17:49.090956  209319 cache.go:208] Successfully downloaded all kic artifacts
	I1107 17:17:49.090996  209319 start.go:364] acquiring machines lock for kubernetes-upgrade-171701: {Name:mk3edf06fba77fc57fdb8a0f925cae571f9287a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 17:17:49.091117  209319 start.go:368] acquired machines lock for "kubernetes-upgrade-171701" in 86.717µs
	I1107 17:17:49.091139  209319 start.go:96] Skipping create...Using existing machine configuration
	I1107 17:17:49.091145  209319 fix.go:55] fixHost starting: 
	I1107 17:17:49.091356  209319 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-171701 --format={{.State.Status}}
	I1107 17:17:49.119304  209319 fix.go:103] recreateIfNeeded on kubernetes-upgrade-171701: state=Stopped err=<nil>
	W1107 17:17:49.119340  209319 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 17:17:49.121768  209319 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-171701" ...
	I1107 17:17:49.123515  209319 cli_runner.go:164] Run: docker start kubernetes-upgrade-171701
	I1107 17:17:49.986239  209319 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-171701 --format={{.State.Status}}
	I1107 17:17:50.020270  209319 kic.go:415] container "kubernetes-upgrade-171701" state is running.
	I1107 17:17:50.020803  209319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-171701
	I1107 17:17:50.064965  209319 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kubernetes-upgrade-171701/config.json ...
	I1107 17:17:50.065213  209319 machine.go:88] provisioning docker machine ...
	I1107 17:17:50.065247  209319 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-171701"
	I1107 17:17:50.065304  209319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171701
	I1107 17:17:50.095096  209319 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:50.095299  209319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49342 <nil> <nil>}
	I1107 17:17:50.095324  209319 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-171701 && echo "kubernetes-upgrade-171701" | sudo tee /etc/hostname
	I1107 17:17:50.095902  209319 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34298->127.0.0.1:49342: read: connection reset by peer
	I1107 17:17:53.238388  209319 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-171701
	
	I1107 17:17:53.238473  209319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171701
	I1107 17:17:53.267254  209319 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:53.267462  209319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49342 <nil> <nil>}
	I1107 17:17:53.267496  209319 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-171701' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-171701/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-171701' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 17:17:53.394347  209319 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 17:17:53.394380  209319 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-44720/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-44720/.minikube}
	I1107 17:17:53.394429  209319 ubuntu.go:177] setting up certificates
	I1107 17:17:53.394441  209319 provision.go:83] configureAuth start
	I1107 17:17:53.394507  209319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-171701
	I1107 17:17:53.423587  209319 provision.go:138] copyHostCerts
	I1107 17:17:53.423656  209319 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem, removing ...
	I1107 17:17:53.423673  209319 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem
	I1107 17:17:53.423745  209319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem (1082 bytes)
	I1107 17:17:53.423855  209319 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem, removing ...
	I1107 17:17:53.423870  209319 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem
	I1107 17:17:53.423912  209319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem (1123 bytes)
	I1107 17:17:53.424000  209319 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem, removing ...
	I1107 17:17:53.424015  209319 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem
	I1107 17:17:53.424049  209319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem (1679 bytes)
	I1107 17:17:53.424137  209319 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-171701 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-171701]
	I1107 17:17:53.531731  209319 provision.go:172] copyRemoteCerts
	I1107 17:17:53.531803  209319 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 17:17:53.531860  209319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171701
	I1107 17:17:53.559504  209319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49342 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/kubernetes-upgrade-171701/id_rsa Username:docker}
	I1107 17:17:53.651823  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 17:17:53.673303  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1107 17:17:53.690889  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 17:17:53.709085  209319 provision.go:86] duration metric: configureAuth took 314.62368ms
	I1107 17:17:53.709130  209319 ubuntu.go:193] setting minikube options for container-runtime
	I1107 17:17:53.709342  209319 config.go:180] Loaded profile config "kubernetes-upgrade-171701": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 17:17:53.709357  209319 machine.go:91] provisioned docker machine in 3.644132808s
	I1107 17:17:53.709368  209319 start.go:300] post-start starting for "kubernetes-upgrade-171701" (driver="docker")
	I1107 17:17:53.709377  209319 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 17:17:53.709454  209319 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 17:17:53.709503  209319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171701
	I1107 17:17:53.742412  209319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49342 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/kubernetes-upgrade-171701/id_rsa Username:docker}
	I1107 17:17:53.831678  209319 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 17:17:53.834830  209319 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 17:17:53.834861  209319 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 17:17:53.834875  209319 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 17:17:53.834884  209319 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 17:17:53.834896  209319 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-44720/.minikube/addons for local assets ...
	I1107 17:17:53.834946  209319 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-44720/.minikube/files for local assets ...
	I1107 17:17:53.835044  209319 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem -> 511762.pem in /etc/ssl/certs
	I1107 17:17:53.835157  209319 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 17:17:53.843350  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem --> /etc/ssl/certs/511762.pem (1708 bytes)
	I1107 17:17:53.867804  209319 start.go:303] post-start completed in 158.419632ms
	I1107 17:17:53.867880  209319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 17:17:53.867940  209319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171701
	I1107 17:17:53.895400  209319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49342 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/kubernetes-upgrade-171701/id_rsa Username:docker}
	I1107 17:17:53.984292  209319 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 17:17:53.988323  209319 fix.go:57] fixHost completed within 4.897171501s
	I1107 17:17:53.988348  209319 start.go:83] releasing machines lock for "kubernetes-upgrade-171701", held for 4.897216283s
	I1107 17:17:53.988421  209319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-171701
	I1107 17:17:54.014551  209319 ssh_runner.go:195] Run: systemctl --version
	I1107 17:17:54.014630  209319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171701
	I1107 17:17:54.014564  209319 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 17:17:54.014812  209319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171701
	I1107 17:17:54.045832  209319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49342 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/kubernetes-upgrade-171701/id_rsa Username:docker}
	I1107 17:17:54.047882  209319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49342 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/kubernetes-upgrade-171701/id_rsa Username:docker}
	I1107 17:17:54.167082  209319 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1107 17:17:54.181750  209319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 17:17:54.192461  209319 docker.go:189] disabling docker service ...
	I1107 17:17:54.192508  209319 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 17:17:54.202277  209319 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 17:17:54.213606  209319 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 17:17:54.294213  209319 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 17:17:54.388811  209319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 17:17:54.398283  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 17:17:54.413383  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I1107 17:17:54.422306  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1107 17:17:54.432883  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1107 17:17:54.442633  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I1107 17:17:54.452838  209319 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 17:17:54.461201  209319 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 17:17:54.469327  209319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:17:54.567715  209319 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1107 17:17:54.655422  209319 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1107 17:17:54.655500  209319 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1107 17:17:54.659792  209319 start.go:472] Will wait 60s for crictl version
	I1107 17:17:54.659849  209319 ssh_runner.go:195] Run: sudo crictl version
	I1107 17:17:54.694796  209319 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-11-07T17:17:54Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1107 17:18:05.742440  209319 ssh_runner.go:195] Run: sudo crictl version
	I1107 17:18:05.795987  209319 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1107 17:18:05.796074  209319 ssh_runner.go:195] Run: containerd --version
	I1107 17:18:05.834989  209319 ssh_runner.go:195] Run: containerd --version
	I1107 17:18:05.981001  209319 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	I1107 17:18:06.064630  209319 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-171701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:18:06.088281  209319 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1107 17:18:06.091719  209319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 17:18:06.177734  209319 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1107 17:18:06.261413  209319 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1107 17:18:06.261528  209319 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 17:18:06.291820  209319 containerd.go:549] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.25.3". assuming images are not preloaded.
	I1107 17:18:06.291878  209319 ssh_runner.go:195] Run: which lz4
	I1107 17:18:06.294890  209319 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1107 17:18:06.297762  209319 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I1107 17:18:06.297792  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (669534256 bytes)
	I1107 17:18:07.896951  209319 containerd.go:496] Took 1.602082 seconds to copy over tarball
	I1107 17:18:07.897023  209319 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 17:18:11.548104  209319 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.651020028s)
	I1107 17:18:11.548144  209319 containerd.go:503] Took 3.651161 seconds to extract the tarball
	I1107 17:18:11.548159  209319 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1107 17:18:11.695894  209319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:18:11.792894  209319 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1107 17:18:11.897144  209319 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 17:18:11.937371  209319 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.25.3 registry.k8s.io/kube-controller-manager:v1.25.3 registry.k8s.io/kube-scheduler:v1.25.3 registry.k8s.io/kube-proxy:v1.25.3 registry.k8s.io/pause:3.8 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1107 17:18:11.937471  209319 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.25.3
	I1107 17:18:11.937520  209319 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:18:11.937735  209319 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.25.3
	I1107 17:18:11.937747  209319 image.go:134] retrieving image: registry.k8s.io/pause:3.8
	I1107 17:18:11.937771  209319 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
	I1107 17:18:11.937894  209319 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.25.3
	I1107 17:18:11.937751  209319 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.4-0
	I1107 17:18:11.938007  209319 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.25.3
	I1107 17:18:11.940009  209319 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.25.3: Error: No such image: registry.k8s.io/kube-apiserver:v1.25.3
	I1107 17:18:11.940042  209319 image.go:177] daemon lookup for registry.k8s.io/pause:3.8: Error: No such image: registry.k8s.io/pause:3.8
	I1107 17:18:11.940056  209319 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.25.3: Error: No such image: registry.k8s.io/kube-controller-manager:v1.25.3
	I1107 17:18:11.940072  209319 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.4-0: Error: No such image: registry.k8s.io/etcd:3.5.4-0
	I1107 17:18:11.940097  209319 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I1107 17:18:11.940008  209319 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:18:11.940015  209319 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.25.3: Error: No such image: registry.k8s.io/kube-scheduler:v1.25.3
	I1107 17:18:11.940237  209319 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.25.3: Error: No such image: registry.k8s.io/kube-proxy:v1.25.3
	I1107 17:18:12.097689  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.8"
	I1107 17:18:12.099712  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.9.3"
	I1107 17:18:12.104999  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.25.3"
	I1107 17:18:12.107320  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.25.3"
	I1107 17:18:12.109733  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.25.3"
	I1107 17:18:12.115814  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.25.3"
	I1107 17:18:12.118941  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.4-0"
	I1107 17:18:12.183527  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1107 17:18:13.033343  209319 cache_images.go:116] "registry.k8s.io/pause:3.8" needs transfer: "registry.k8s.io/pause:3.8" does not exist at hash "4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517" in container runtime
	I1107 17:18:13.033399  209319 cri.go:216] Removing image: registry.k8s.io/pause:3.8
	I1107 17:18:13.033442  209319 ssh_runner.go:195] Run: which crictl
	I1107 17:18:13.033496  209319 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.25.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.25.3" does not exist at hash "6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912" in container runtime
	I1107 17:18:13.033539  209319 cri.go:216] Removing image: registry.k8s.io/kube-scheduler:v1.25.3
	I1107 17:18:13.033570  209319 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.25.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.25.3" does not exist at hash "0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0" in container runtime
	I1107 17:18:13.033586  209319 ssh_runner.go:195] Run: which crictl
	I1107 17:18:13.033600  209319 cri.go:216] Removing image: registry.k8s.io/kube-apiserver:v1.25.3
	I1107 17:18:13.033440  209319 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.9.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.9.3" does not exist at hash "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
	I1107 17:18:13.033642  209319 ssh_runner.go:195] Run: which crictl
	I1107 17:18:13.033660  209319 cri.go:216] Removing image: registry.k8s.io/coredns/coredns:v1.9.3
	I1107 17:18:13.033690  209319 ssh_runner.go:195] Run: which crictl
	I1107 17:18:13.033731  209319 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.25.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.25.3" does not exist at hash "60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a" in container runtime
	I1107 17:18:13.033753  209319 cri.go:216] Removing image: registry.k8s.io/kube-controller-manager:v1.25.3
	I1107 17:18:13.033783  209319 ssh_runner.go:195] Run: which crictl
	I1107 17:18:13.033783  209319 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.25.3" needs transfer: "registry.k8s.io/kube-proxy:v1.25.3" does not exist at hash "beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041" in container runtime
	I1107 17:18:13.033804  209319 cri.go:216] Removing image: registry.k8s.io/kube-proxy:v1.25.3
	I1107 17:18:13.033837  209319 ssh_runner.go:195] Run: which crictl
	I1107 17:18:13.049018  209319 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1107 17:18:13.049066  209319 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:18:13.049078  209319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.8
	I1107 17:18:13.049104  209319 ssh_runner.go:195] Run: which crictl
	I1107 17:18:13.049142  209319 cache_images.go:116] "registry.k8s.io/etcd:3.5.4-0" needs transfer: "registry.k8s.io/etcd:3.5.4-0" does not exist at hash "a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66" in container runtime
	I1107 17:18:13.049205  209319 cri.go:216] Removing image: registry.k8s.io/etcd:3.5.4-0
	I1107 17:18:13.049218  209319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.25.3
	I1107 17:18:13.049240  209319 ssh_runner.go:195] Run: which crictl
	I1107 17:18:13.049279  209319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.25.3
	I1107 17:18:13.049288  209319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.9.3
	I1107 17:18:13.049333  209319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.25.3
	I1107 17:18:13.049375  209319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.25.3
	I1107 17:18:14.425915  209319 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.8: (1.376804819s)
	I1107 17:18:14.425948  209319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8
	I1107 17:18:14.426035  209319 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.8
	I1107 17:18:14.426111  209319 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.25.3: (1.376764471s)
	I1107 17:18:14.426128  209319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3
	I1107 17:18:14.426197  209319 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.3
	I1107 17:18:14.429665  209319 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.25.3: (1.380363898s)
	I1107 17:18:14.429687  209319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3
	I1107 17:18:14.429750  209319 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.3
	I1107 17:18:14.429805  209319 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.9.3: (1.380505121s)
	I1107 17:18:14.429814  209319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
	I1107 17:18:14.429863  209319 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3
	I1107 17:18:14.429946  209319 ssh_runner.go:235] Completed: which crictl: (1.380693407s)
	I1107 17:18:14.429976  209319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.4-0
	I1107 17:18:14.430033  209319 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.25.3: (1.380795033s)
	I1107 17:18:14.430041  209319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3
	I1107 17:18:14.430087  209319 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.3
	I1107 17:18:14.430160  209319 ssh_runner.go:235] Completed: which crictl: (1.381037763s)
	I1107 17:18:14.430189  209319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:18:14.430261  209319 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.25.3: (1.380853296s)
	I1107 17:18:14.430278  209319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3
	I1107 17:18:14.430371  209319 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I1107 17:18:14.435550  209319 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.8: stat -c "%s %y" /var/lib/minikube/images/pause_3.8: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.8': No such file or directory
	I1107 17:18:14.435592  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 --> /var/lib/minikube/images/pause_3.8 (311296 bytes)
	I1107 17:18:14.545716  209319 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.8
	I1107 17:18:14.545823  209319 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.8
	I1107 17:18:14.731678  209319 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.25.3': No such file or directory
	I1107 17:18:14.731714  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 --> /var/lib/minikube/images/kube-scheduler_v1.25.3 (15801856 bytes)
	I1107 17:18:14.731772  209319 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.25.3': No such file or directory
	I1107 17:18:14.731798  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 --> /var/lib/minikube/images/kube-apiserver_v1.25.3 (34241024 bytes)
	I1107 17:18:14.731802  209319 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
	I1107 17:18:14.731814  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (14839296 bytes)
	I1107 17:18:14.731838  209319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0
	I1107 17:18:14.731877  209319 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.25.3': No such file or directory
	I1107 17:18:14.731888  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 --> /var/lib/minikube/images/kube-proxy_v1.25.3 (20268032 bytes)
	I1107 17:18:14.731916  209319 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0
	I1107 17:18:14.759348  209319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1107 17:18:14.759445  209319 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1107 17:18:14.759555  209319 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.25.3': No such file or directory
	I1107 17:18:14.759585  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 --> /var/lib/minikube/images/kube-controller-manager_v1.25.3 (31264768 bytes)
	I1107 17:18:14.775676  209319 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 from cache
	I1107 17:18:14.786077  209319 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.4-0': No such file or directory
	I1107 17:18:14.786114  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 --> /var/lib/minikube/images/etcd_3.5.4-0 (102160384 bytes)
	I1107 17:18:14.876110  209319 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1107 17:18:14.876168  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1107 17:18:15.116934  209319 containerd.go:233] Loading image: /var/lib/minikube/images/kube-scheduler_v1.25.3
	I1107 17:18:15.116997  209319 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.25.3
	I1107 17:18:16.438457  209319 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.25.3: (1.321430336s)
	I1107 17:18:16.438491  209319 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 from cache
	I1107 17:18:16.438511  209319 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.9.3
	I1107 17:18:16.438551  209319 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.9.3
	I1107 17:18:17.223052  209319 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 from cache
	I1107 17:18:17.223142  209319 containerd.go:233] Loading image: /var/lib/minikube/images/kube-proxy_v1.25.3
	I1107 17:18:17.223215  209319 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.25.3
	I1107 17:18:17.945773  209319 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 from cache
	I1107 17:18:17.945825  209319 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1107 17:18:17.945884  209319 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1107 17:18:18.413731  209319 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1107 17:18:18.413771  209319 containerd.go:233] Loading image: /var/lib/minikube/images/kube-apiserver_v1.25.3
	I1107 17:18:18.413818  209319 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.3
	I1107 17:18:19.890165  209319 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.3: (1.476313961s)
	I1107 17:18:19.890200  209319 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 from cache
	I1107 17:18:19.890232  209319 containerd.go:233] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I1107 17:18:19.890270  209319 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I1107 17:18:21.221078  209319 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.3: (1.330779489s)
	I1107 17:18:21.221107  209319 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 from cache
	I1107 17:18:21.221139  209319 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.4-0
	I1107 17:18:21.221174  209319 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0
	I1107 17:18:26.251989  209319 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0: (5.030780685s)
	I1107 17:18:26.252017  209319 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 from cache
	I1107 17:18:26.252042  209319 cache_images.go:123] Successfully loaded all cached images
	I1107 17:18:26.252047  209319 cache_images.go:92] LoadImages completed in 14.314641971s
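The load sequence above repeats one pattern per image: stat the target path on the node, scp the cached tarball over only when the stat fails with status 1, then `ctr -n=k8s.io images import` it so containerd's kubernetes namespace sees it. A minimal local sketch of that step (`load_cached_image` and both paths are illustrative, not minikube's API, and the remote commands are only echoed):

```shell
#!/usr/bin/env bash
# Sketch of minikube's per-image cache-load step seen in the log:
# existence check via stat, copy if missing, then containerd import.
load_cached_image() {
  local tarball="$1" remote="$2"
  # minikube runs this stat over ssh; a non-zero exit means "not on the node yet".
  if ! stat -c "%s %y" "$remote" >/dev/null 2>&1; then
    echo "scp $tarball --> $remote"
  fi
  # Import into the k8s.io namespace so the CRI/kubelet can find the image.
  echo "sudo ctr -n=k8s.io images import $remote"
}

load_cached_image \
  "$HOME/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" \
  /var/lib/minikube/images/pause_3.8
```

The stat-first check is what makes the transfer skippable on restart: an unchanged tarball already on the node is reused rather than re-copied.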
	I1107 17:18:26.252101  209319 ssh_runner.go:195] Run: sudo crictl info
	I1107 17:18:26.275256  209319 cni.go:95] Creating CNI manager for ""
	I1107 17:18:26.275293  209319 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1107 17:18:26.275308  209319 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 17:18:26.275321  209319 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-171701 NodeName:kubernetes-upgrade-171701 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 17:18:26.275482  209319 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-171701"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 17:18:26.275582  209319 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-171701 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-171701 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 17:18:26.275634  209319 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 17:18:26.282637  209319 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 17:18:26.282694  209319 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 17:18:26.289138  209319 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (549 bytes)
	I1107 17:18:26.301136  209319 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 17:18:26.313557  209319 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I1107 17:18:26.326154  209319 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1107 17:18:26.329041  209319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
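The two runs above are minikube's idempotent /etc/hosts update: first grep for the exact `IP<tab>name` entry, and only if it is missing, rewrite the file with any stale line for that hostname filtered out and the fresh mapping appended. A self-contained sketch of that pattern against a temp file, so it never touches the real /etc/hosts (`add_host` is an illustrative name):

```shell
# Mirrors the log's grep-then-rewrite hosts update, on a temp copy.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$HOSTS"

add_host() {
  local ip="$1" name="$2"
  # Skip the rewrite when the exact entry is already present (idempotent).
  if ! grep -q "^${ip}[[:space:]]${name}\$" "$HOSTS"; then
    # Drop any old line ending in <tab><name>, then append the new mapping.
    { grep -v $'\t'"${name}"'$' "$HOSTS"; printf '%s\t%s\n' "$ip" "$name"; } > "$HOSTS.new"
    mv "$HOSTS.new" "$HOSTS"
  fi
}

add_host 192.168.67.2 control-plane.minikube.internal
```

Running `add_host` again with the same arguments is a no-op, which is why the restart path in the log can call this unconditionally.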
	I1107 17:18:26.374494  209319 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kubernetes-upgrade-171701 for IP: 192.168.67.2
	I1107 17:18:26.374637  209319 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-44720/.minikube/ca.key
	I1107 17:18:26.374686  209319 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.key
	I1107 17:18:26.374771  209319 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kubernetes-upgrade-171701/client.key
	I1107 17:18:26.374841  209319 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kubernetes-upgrade-171701/apiserver.key.c7fa3a9e
	I1107 17:18:26.374891  209319 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kubernetes-upgrade-171701/proxy-client.key
	I1107 17:18:26.375008  209319 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176.pem (1338 bytes)
	W1107 17:18:26.375045  209319 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176_empty.pem, impossibly tiny 0 bytes
	I1107 17:18:26.375062  209319 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 17:18:26.375099  209319 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem (1082 bytes)
	I1107 17:18:26.375134  209319 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem (1123 bytes)
	I1107 17:18:26.375162  209319 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem (1679 bytes)
	I1107 17:18:26.375240  209319 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem (1708 bytes)
	I1107 17:18:26.376022  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kubernetes-upgrade-171701/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 17:18:26.393552  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kubernetes-upgrade-171701/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 17:18:26.410243  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kubernetes-upgrade-171701/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 17:18:26.426910  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kubernetes-upgrade-171701/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 17:18:26.443826  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 17:18:26.460613  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 17:18:26.478756  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 17:18:26.497087  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 17:18:26.514622  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem --> /usr/share/ca-certificates/511762.pem (1708 bytes)
	I1107 17:18:26.531465  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 17:18:26.548118  209319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176.pem --> /usr/share/ca-certificates/51176.pem (1338 bytes)
	I1107 17:18:26.564709  209319 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 17:18:26.576925  209319 ssh_runner.go:195] Run: openssl version
	I1107 17:18:26.581678  209319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 17:18:26.588708  209319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:18:26.591740  209319 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:18:26.591787  209319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:18:26.596374  209319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 17:18:26.602919  209319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51176.pem && ln -fs /usr/share/ca-certificates/51176.pem /etc/ssl/certs/51176.pem"
	I1107 17:18:26.610086  209319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51176.pem
	I1107 17:18:26.612948  209319 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/51176.pem
	I1107 17:18:26.612994  209319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51176.pem
	I1107 17:18:26.617782  209319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/51176.pem /etc/ssl/certs/51391683.0"
	I1107 17:18:26.624268  209319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/511762.pem && ln -fs /usr/share/ca-certificates/511762.pem /etc/ssl/certs/511762.pem"
	I1107 17:18:26.631180  209319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/511762.pem
	I1107 17:18:26.634453  209319 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/511762.pem
	I1107 17:18:26.634510  209319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/511762.pem
	I1107 17:18:26.639117  209319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/511762.pem /etc/ssl/certs/3ec20f2e.0"
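The `openssl x509 -hash -noout` runs above explain the otherwise cryptic symlink names (`b5213941.0`, `51391683.0`, `3ec20f2e.0`): OpenSSL locates trusted CAs in a hashed directory by an 8-hex-digit subject-name hash, so each certificate is linked at `/etc/ssl/certs/<hash>.0`. A sketch using a throwaway self-signed certificate (the CN and temp paths are illustrative, and the `ln` is only echoed):

```shell
# Demonstrates how the /etc/ssl/certs/<hash>.0 link names in the log are derived.
tmp=$(mktemp -d)
# Throwaway self-signed cert; CN=demoCA is purely illustrative.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" -days 1 2>/dev/null
# OpenSSL's trust-directory lookup key: an 8-hex-digit subject-name hash.
certhash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
echo "ln -fs $tmp/ca.pem /etc/ssl/certs/${certhash}.0"
```

The `.0` suffix disambiguates hash collisions (`.1`, `.2`, ... for further certs with the same subject hash), which is why the log's `test -L` checks probe the `.0` name specifically.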
	I1107 17:18:26.645609  209319 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-171701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-171701 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:18:26.645689  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1107 17:18:26.645733  209319 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 17:18:26.668604  209319 cri.go:87] found id: ""
	I1107 17:18:26.668682  209319 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 17:18:26.675319  209319 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1107 17:18:26.675345  209319 kubeadm.go:627] restartCluster start
	I1107 17:18:26.675388  209319 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 17:18:26.681465  209319 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:18:26.682231  209319 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-171701" does not appear in /home/jenkins/minikube-integration/15310-44720/kubeconfig
	I1107 17:18:26.682680  209319 kubeconfig.go:146] "kubernetes-upgrade-171701" context is missing from /home/jenkins/minikube-integration/15310-44720/kubeconfig - will repair!
	I1107 17:18:26.683410  209319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-44720/kubeconfig: {Name:mk626f4fda2bff4e217db2cf8a2887eea6970f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:18:26.739454  209319 kapi.go:59] client config for kubernetes-upgrade-171701: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kubernetes-upgrade-171701/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kubernetes-upgrade-171701/client.key", CAFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 17:18:26.740088  209319 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 17:18:26.747463  209319 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-11-07 17:17:12.991638034 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-11-07 17:18:26.319019488 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.67.2
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.67.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-171701
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.25.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1107 17:18:26.747489  209319 kubeadm.go:1114] stopping kube-system containers ...
	I1107 17:18:26.747514  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1107 17:18:26.747559  209319 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 17:18:26.776114  209319 cri.go:87] found id: ""
	I1107 17:18:26.776210  209319 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 17:18:26.786261  209319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:18:26.793256  209319 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5707 Nov  7 17:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Nov  7 17:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Nov  7 17:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5691 Nov  7 17:17 /etc/kubernetes/scheduler.conf
	
	I1107 17:18:26.793318  209319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1107 17:18:26.800000  209319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1107 17:18:26.806341  209319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1107 17:18:26.813011  209319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1107 17:18:26.819339  209319 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 17:18:26.877930  209319 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 17:18:26.877959  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:18:26.921117  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:18:27.918584  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:18:28.124647  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:18:28.177128  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:18:28.232874  209319 api_server.go:51] waiting for apiserver process to appear ...
	I1107 17:18:28.232939  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:28.744479  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:29.244012  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:29.744269  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:30.244845  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:30.744535  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:31.244079  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:31.743882  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:32.244773  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:32.743882  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:33.243978  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:33.744343  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:34.244370  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:34.744753  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:35.244139  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:35.743876  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:36.244510  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:36.744128  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:37.244256  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:37.744558  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:38.243948  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:38.744376  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:39.244147  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:39.744039  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:40.244076  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:40.744193  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:41.244594  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:41.743969  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:42.244436  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:42.744807  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:43.243906  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:43.744455  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:44.244824  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:44.744118  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:45.244500  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:45.744715  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:46.244024  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:46.744057  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:47.244758  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:47.744056  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:48.244332  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:48.743835  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:49.243944  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:49.744720  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:50.243935  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:50.744209  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:51.244036  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:51.744255  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:52.244453  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:52.744200  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:53.244542  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:53.744356  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:54.243893  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:54.744456  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:55.244818  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:55.743958  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:56.244713  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:56.744803  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:57.244517  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:57.744391  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:58.244543  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:58.744645  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:59.244772  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:18:59.744060  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:00.244854  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:00.744257  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:01.244035  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:01.744427  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:02.244368  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:02.744808  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:03.244022  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:03.744834  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:04.244473  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:04.744069  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:05.243899  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:05.744644  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:06.244030  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:06.744631  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:07.243911  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:07.744466  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:08.244789  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:08.744879  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:09.243904  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:09.743947  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:10.243817  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:10.744533  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:11.244086  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:11.744677  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:12.244514  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:12.744728  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:13.244702  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:13.744408  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:14.244236  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:14.744788  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:15.243974  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:15.744558  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:16.243898  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:16.744228  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:17.244265  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:17.744285  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:18.244270  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:18.744223  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:19.244499  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:19.744831  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:20.244264  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:20.743863  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:21.244574  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:21.744216  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:22.243807  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:22.744687  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:23.244267  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:23.744624  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:24.244641  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:24.744406  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:25.244416  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:25.744587  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:26.244653  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:26.744339  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:27.244338  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:27.744011  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:28.243884  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:19:28.243953  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:19:28.270485  209319 cri.go:87] found id: ""
	I1107 17:19:28.270512  209319 logs.go:274] 0 containers: []
	W1107 17:19:28.270521  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:19:28.270529  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:19:28.270582  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:19:28.296718  209319 cri.go:87] found id: ""
	I1107 17:19:28.296751  209319 logs.go:274] 0 containers: []
	W1107 17:19:28.296761  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:19:28.296770  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:19:28.296834  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:19:28.327376  209319 cri.go:87] found id: ""
	I1107 17:19:28.327412  209319 logs.go:274] 0 containers: []
	W1107 17:19:28.327422  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:19:28.327430  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:19:28.327481  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:19:28.364452  209319 cri.go:87] found id: ""
	I1107 17:19:28.364486  209319 logs.go:274] 0 containers: []
	W1107 17:19:28.364496  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:19:28.364505  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:19:28.364566  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:19:28.392836  209319 cri.go:87] found id: ""
	I1107 17:19:28.392865  209319 logs.go:274] 0 containers: []
	W1107 17:19:28.392874  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:19:28.392883  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:19:28.392935  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:19:28.424429  209319 cri.go:87] found id: ""
	I1107 17:19:28.424459  209319 logs.go:274] 0 containers: []
	W1107 17:19:28.424467  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:19:28.424476  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:19:28.424524  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:19:28.451827  209319 cri.go:87] found id: ""
	I1107 17:19:28.451862  209319 logs.go:274] 0 containers: []
	W1107 17:19:28.451872  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:19:28.451882  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:19:28.451935  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:19:28.495767  209319 cri.go:87] found id: ""
	I1107 17:19:28.495800  209319 logs.go:274] 0 containers: []
	W1107 17:19:28.495810  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:19:28.495823  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:19:28.495838  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:19:28.517119  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:19:28.517152  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:19:28.610917  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:19:28.610944  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:19:28.610958  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:19:28.662478  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:19:28.662525  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:19:28.694714  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:19:28.694750  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:19:28.714407  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:38 kubernetes-upgrade-171701 kubelet[1363]: E1107 17:18:38.692572    1363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.715024  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:39 kubernetes-upgrade-171701 kubelet[1377]: E1107 17:18:39.443383    1377 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.715635  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:40 kubernetes-upgrade-171701 kubelet[1391]: E1107 17:18:40.178896    1391 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.716235  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:40 kubernetes-upgrade-171701 kubelet[1406]: E1107 17:18:40.931662    1406 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.716844  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:41 kubernetes-upgrade-171701 kubelet[1419]: E1107 17:18:41.679991    1419 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.717418  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:42 kubernetes-upgrade-171701 kubelet[1434]: E1107 17:18:42.431107    1434 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.718025  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:43 kubernetes-upgrade-171701 kubelet[1448]: E1107 17:18:43.181536    1448 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.718627  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:43 kubernetes-upgrade-171701 kubelet[1463]: E1107 17:18:43.930404    1463 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.719235  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:44 kubernetes-upgrade-171701 kubelet[1476]: E1107 17:18:44.679523    1476 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.719645  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:45 kubernetes-upgrade-171701 kubelet[1492]: E1107 17:18:45.430356    1492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.720011  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:46 kubernetes-upgrade-171701 kubelet[1505]: E1107 17:18:46.180117    1505 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.720396  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:46 kubernetes-upgrade-171701 kubelet[1520]: E1107 17:18:46.931770    1520 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.720945  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:47 kubernetes-upgrade-171701 kubelet[1532]: E1107 17:18:47.681135    1532 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.721547  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:48 kubernetes-upgrade-171701 kubelet[1548]: E1107 17:18:48.429166    1548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.722130  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:49 kubernetes-upgrade-171701 kubelet[1561]: E1107 17:18:49.180728    1561 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.722789  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:49 kubernetes-upgrade-171701 kubelet[1576]: E1107 17:18:49.931586    1576 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.723378  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:50 kubernetes-upgrade-171701 kubelet[1589]: E1107 17:18:50.687124    1589 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.724011  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:51 kubernetes-upgrade-171701 kubelet[1605]: E1107 17:18:51.432212    1605 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.724629  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:52 kubernetes-upgrade-171701 kubelet[1618]: E1107 17:18:52.195003    1618 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.725239  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:52 kubernetes-upgrade-171701 kubelet[1633]: E1107 17:18:52.950353    1633 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.725848  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:53 kubernetes-upgrade-171701 kubelet[1645]: E1107 17:18:53.691557    1645 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.726565  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:54 kubernetes-upgrade-171701 kubelet[1660]: E1107 17:18:54.447293    1660 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.727193  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:55 kubernetes-upgrade-171701 kubelet[1671]: E1107 17:18:55.188811    1671 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.727809  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:55 kubernetes-upgrade-171701 kubelet[1685]: E1107 17:18:55.950061    1685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.728403  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:56 kubernetes-upgrade-171701 kubelet[1697]: E1107 17:18:56.716712    1697 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.728836  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:57 kubernetes-upgrade-171701 kubelet[1710]: E1107 17:18:57.445214    1710 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.729320  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:58 kubernetes-upgrade-171701 kubelet[1721]: E1107 17:18:58.198358    1721 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.729916  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:58 kubernetes-upgrade-171701 kubelet[1736]: E1107 17:18:58.935901    1736 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.730671  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:59 kubernetes-upgrade-171701 kubelet[1748]: E1107 17:18:59.700503    1748 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.731264  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:00 kubernetes-upgrade-171701 kubelet[1762]: E1107 17:19:00.450961    1762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.731899  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:01 kubernetes-upgrade-171701 kubelet[1774]: E1107 17:19:01.211217    1774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.732493  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:01 kubernetes-upgrade-171701 kubelet[1788]: E1107 17:19:01.951888    1788 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.733089  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:02 kubernetes-upgrade-171701 kubelet[1799]: E1107 17:19:02.695941    1799 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.733560  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:03 kubernetes-upgrade-171701 kubelet[1813]: E1107 17:19:03.466174    1813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.734015  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:04 kubernetes-upgrade-171701 kubelet[1827]: E1107 17:19:04.192476    1827 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.734544  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:04 kubernetes-upgrade-171701 kubelet[1843]: E1107 17:19:04.950039    1843 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.735095  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:05 kubernetes-upgrade-171701 kubelet[1854]: E1107 17:19:05.690960    1854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.735646  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:06 kubernetes-upgrade-171701 kubelet[1869]: E1107 17:19:06.447852    1869 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.736206  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:07 kubernetes-upgrade-171701 kubelet[1882]: E1107 17:19:07.181778    1882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.736817  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:07 kubernetes-upgrade-171701 kubelet[1898]: E1107 17:19:07.932726    1898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.737341  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:08 kubernetes-upgrade-171701 kubelet[1911]: E1107 17:19:08.696361    1911 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.737856  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:09 kubernetes-upgrade-171701 kubelet[1926]: E1107 17:19:09.445104    1926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.738505  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:10 kubernetes-upgrade-171701 kubelet[1938]: E1107 17:19:10.191469    1938 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.739064  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:10 kubernetes-upgrade-171701 kubelet[1954]: E1107 17:19:10.969721    1954 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.739645  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:11 kubernetes-upgrade-171701 kubelet[1967]: E1107 17:19:11.691353    1967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.740079  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:12 kubernetes-upgrade-171701 kubelet[1981]: E1107 17:19:12.449542    1981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.740663  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:13 kubernetes-upgrade-171701 kubelet[1993]: E1107 17:19:13.193306    1993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.741298  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:13 kubernetes-upgrade-171701 kubelet[2007]: E1107 17:19:13.940859    2007 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.741888  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:14 kubernetes-upgrade-171701 kubelet[2020]: E1107 17:19:14.683213    2020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.742454  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:15 kubernetes-upgrade-171701 kubelet[2036]: E1107 17:19:15.437870    2036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.743007  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:16 kubernetes-upgrade-171701 kubelet[2048]: E1107 17:19:16.185006    2048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.743614  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:16 kubernetes-upgrade-171701 kubelet[2063]: E1107 17:19:16.971632    2063 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.744222  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:17 kubernetes-upgrade-171701 kubelet[2074]: E1107 17:19:17.684970    2074 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.744842  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:18 kubernetes-upgrade-171701 kubelet[2089]: E1107 17:19:18.453084    2089 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.745494  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:19 kubernetes-upgrade-171701 kubelet[2100]: E1107 17:19:19.184377    2100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.746118  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:19 kubernetes-upgrade-171701 kubelet[2115]: E1107 17:19:19.938763    2115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.746888  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:20 kubernetes-upgrade-171701 kubelet[2129]: E1107 17:19:20.695573    2129 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.747516  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:21 kubernetes-upgrade-171701 kubelet[2144]: E1107 17:19:21.441885    2144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.748151  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:22 kubernetes-upgrade-171701 kubelet[2156]: E1107 17:19:22.202488    2156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.748701  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:22 kubernetes-upgrade-171701 kubelet[2171]: E1107 17:19:22.941888    2171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.749287  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:23 kubernetes-upgrade-171701 kubelet[2184]: E1107 17:19:23.712349    2184 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.749880  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:24 kubernetes-upgrade-171701 kubelet[2199]: E1107 17:19:24.449282    2199 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.750531  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:25 kubernetes-upgrade-171701 kubelet[2211]: E1107 17:19:25.183460    2211 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.751159  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:25 kubernetes-upgrade-171701 kubelet[2226]: E1107 17:19:25.933938    2226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.751606  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:26 kubernetes-upgrade-171701 kubelet[2239]: E1107 17:19:26.691045    2239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.752148  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:27 kubernetes-upgrade-171701 kubelet[2254]: E1107 17:19:27.433159    2254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.752740  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:28 kubernetes-upgrade-171701 kubelet[2267]: E1107 17:19:28.192383    2267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:19:28.752935  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:19:28.752985  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:19:28.753171  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:19:28.753194  209319 out.go:239]   Nov 07 17:19:25 kubernetes-upgrade-171701 kubelet[2211]: E1107 17:19:25.183460    2211 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.753202  209319 out.go:239]   Nov 07 17:19:25 kubernetes-upgrade-171701 kubelet[2226]: E1107 17:19:25.933938    2226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.753210  209319 out.go:239]   Nov 07 17:19:26 kubernetes-upgrade-171701 kubelet[2239]: E1107 17:19:26.691045    2239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.753217  209319 out.go:239]   Nov 07 17:19:27 kubernetes-upgrade-171701 kubelet[2254]: E1107 17:19:27.433159    2254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:28.753225  209319 out.go:239]   Nov 07 17:19:28 kubernetes-upgrade-171701 kubelet[2267]: E1107 17:19:28.192383    2267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:19:28.753268  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:19:28.753285  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:19:38.754479  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:39.243773  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:19:39.243835  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:19:39.268308  209319 cri.go:87] found id: ""
	I1107 17:19:39.268339  209319 logs.go:274] 0 containers: []
	W1107 17:19:39.268348  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:19:39.268357  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:19:39.268435  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:19:39.295822  209319 cri.go:87] found id: ""
	I1107 17:19:39.295847  209319 logs.go:274] 0 containers: []
	W1107 17:19:39.295853  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:19:39.295860  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:19:39.295908  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:19:39.324965  209319 cri.go:87] found id: ""
	I1107 17:19:39.324993  209319 logs.go:274] 0 containers: []
	W1107 17:19:39.325000  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:19:39.325006  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:19:39.325057  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:19:39.349556  209319 cri.go:87] found id: ""
	I1107 17:19:39.349578  209319 logs.go:274] 0 containers: []
	W1107 17:19:39.349586  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:19:39.349594  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:19:39.349643  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:19:39.372398  209319 cri.go:87] found id: ""
	I1107 17:19:39.372425  209319 logs.go:274] 0 containers: []
	W1107 17:19:39.372433  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:19:39.372440  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:19:39.372479  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:19:39.396756  209319 cri.go:87] found id: ""
	I1107 17:19:39.396781  209319 logs.go:274] 0 containers: []
	W1107 17:19:39.396790  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:19:39.396800  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:19:39.396860  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:19:39.422825  209319 cri.go:87] found id: ""
	I1107 17:19:39.422853  209319 logs.go:274] 0 containers: []
	W1107 17:19:39.422863  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:19:39.422871  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:19:39.422926  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:19:39.469822  209319 cri.go:87] found id: ""
	I1107 17:19:39.469856  209319 logs.go:274] 0 containers: []
	W1107 17:19:39.469865  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:19:39.469876  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:19:39.469889  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:19:39.485794  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:19:39.485827  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:19:39.564821  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:19:39.564854  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:19:39.564870  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:19:39.603487  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:19:39.603540  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:19:39.644321  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:19:39.644350  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:19:39.662673  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:49 kubernetes-upgrade-171701 kubelet[1576]: E1107 17:18:49.931586    1576 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.663161  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:50 kubernetes-upgrade-171701 kubelet[1589]: E1107 17:18:50.687124    1589 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.663564  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:51 kubernetes-upgrade-171701 kubelet[1605]: E1107 17:18:51.432212    1605 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.663935  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:52 kubernetes-upgrade-171701 kubelet[1618]: E1107 17:18:52.195003    1618 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.664311  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:52 kubernetes-upgrade-171701 kubelet[1633]: E1107 17:18:52.950353    1633 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.664692  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:53 kubernetes-upgrade-171701 kubelet[1645]: E1107 17:18:53.691557    1645 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.665231  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:54 kubernetes-upgrade-171701 kubelet[1660]: E1107 17:18:54.447293    1660 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.665647  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:55 kubernetes-upgrade-171701 kubelet[1671]: E1107 17:18:55.188811    1671 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.666141  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:55 kubernetes-upgrade-171701 kubelet[1685]: E1107 17:18:55.950061    1685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.666761  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:56 kubernetes-upgrade-171701 kubelet[1697]: E1107 17:18:56.716712    1697 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.667358  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:57 kubernetes-upgrade-171701 kubelet[1710]: E1107 17:18:57.445214    1710 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.667960  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:58 kubernetes-upgrade-171701 kubelet[1721]: E1107 17:18:58.198358    1721 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.668389  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:58 kubernetes-upgrade-171701 kubelet[1736]: E1107 17:18:58.935901    1736 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.668750  209319 logs.go:138] Found kubelet problem: Nov 07 17:18:59 kubernetes-upgrade-171701 kubelet[1748]: E1107 17:18:59.700503    1748 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.669097  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:00 kubernetes-upgrade-171701 kubelet[1762]: E1107 17:19:00.450961    1762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.669467  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:01 kubernetes-upgrade-171701 kubelet[1774]: E1107 17:19:01.211217    1774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.669819  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:01 kubernetes-upgrade-171701 kubelet[1788]: E1107 17:19:01.951888    1788 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.670170  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:02 kubernetes-upgrade-171701 kubelet[1799]: E1107 17:19:02.695941    1799 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.670614  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:03 kubernetes-upgrade-171701 kubelet[1813]: E1107 17:19:03.466174    1813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.670968  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:04 kubernetes-upgrade-171701 kubelet[1827]: E1107 17:19:04.192476    1827 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.671322  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:04 kubernetes-upgrade-171701 kubelet[1843]: E1107 17:19:04.950039    1843 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.671676  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:05 kubernetes-upgrade-171701 kubelet[1854]: E1107 17:19:05.690960    1854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.672026  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:06 kubernetes-upgrade-171701 kubelet[1869]: E1107 17:19:06.447852    1869 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.672388  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:07 kubernetes-upgrade-171701 kubelet[1882]: E1107 17:19:07.181778    1882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.672777  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:07 kubernetes-upgrade-171701 kubelet[1898]: E1107 17:19:07.932726    1898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.673150  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:08 kubernetes-upgrade-171701 kubelet[1911]: E1107 17:19:08.696361    1911 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.673520  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:09 kubernetes-upgrade-171701 kubelet[1926]: E1107 17:19:09.445104    1926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.673902  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:10 kubernetes-upgrade-171701 kubelet[1938]: E1107 17:19:10.191469    1938 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.674275  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:10 kubernetes-upgrade-171701 kubelet[1954]: E1107 17:19:10.969721    1954 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.674659  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:11 kubernetes-upgrade-171701 kubelet[1967]: E1107 17:19:11.691353    1967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.675033  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:12 kubernetes-upgrade-171701 kubelet[1981]: E1107 17:19:12.449542    1981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.675421  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:13 kubernetes-upgrade-171701 kubelet[1993]: E1107 17:19:13.193306    1993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.675803  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:13 kubernetes-upgrade-171701 kubelet[2007]: E1107 17:19:13.940859    2007 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.676184  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:14 kubernetes-upgrade-171701 kubelet[2020]: E1107 17:19:14.683213    2020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.676557  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:15 kubernetes-upgrade-171701 kubelet[2036]: E1107 17:19:15.437870    2036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.676935  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:16 kubernetes-upgrade-171701 kubelet[2048]: E1107 17:19:16.185006    2048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.677307  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:16 kubernetes-upgrade-171701 kubelet[2063]: E1107 17:19:16.971632    2063 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.677687  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:17 kubernetes-upgrade-171701 kubelet[2074]: E1107 17:19:17.684970    2074 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.678057  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:18 kubernetes-upgrade-171701 kubelet[2089]: E1107 17:19:18.453084    2089 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.678487  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:19 kubernetes-upgrade-171701 kubelet[2100]: E1107 17:19:19.184377    2100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.678905  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:19 kubernetes-upgrade-171701 kubelet[2115]: E1107 17:19:19.938763    2115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.679278  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:20 kubernetes-upgrade-171701 kubelet[2129]: E1107 17:19:20.695573    2129 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.679659  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:21 kubernetes-upgrade-171701 kubelet[2144]: E1107 17:19:21.441885    2144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.680033  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:22 kubernetes-upgrade-171701 kubelet[2156]: E1107 17:19:22.202488    2156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.680406  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:22 kubernetes-upgrade-171701 kubelet[2171]: E1107 17:19:22.941888    2171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.680844  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:23 kubernetes-upgrade-171701 kubelet[2184]: E1107 17:19:23.712349    2184 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.681263  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:24 kubernetes-upgrade-171701 kubelet[2199]: E1107 17:19:24.449282    2199 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.681642  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:25 kubernetes-upgrade-171701 kubelet[2211]: E1107 17:19:25.183460    2211 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.682017  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:25 kubernetes-upgrade-171701 kubelet[2226]: E1107 17:19:25.933938    2226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.682450  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:26 kubernetes-upgrade-171701 kubelet[2239]: E1107 17:19:26.691045    2239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.682872  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:27 kubernetes-upgrade-171701 kubelet[2254]: E1107 17:19:27.433159    2254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.683256  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:28 kubernetes-upgrade-171701 kubelet[2267]: E1107 17:19:28.192383    2267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.683604  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:28 kubernetes-upgrade-171701 kubelet[2409]: E1107 17:19:28.940418    2409 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.683960  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:29 kubernetes-upgrade-171701 kubelet[2419]: E1107 17:19:29.685113    2419 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.684338  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:30 kubernetes-upgrade-171701 kubelet[2431]: E1107 17:19:30.441485    2431 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.684712  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:31 kubernetes-upgrade-171701 kubelet[2442]: E1107 17:19:31.190348    2442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.685063  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:31 kubernetes-upgrade-171701 kubelet[2452]: E1107 17:19:31.937101    2452 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.685427  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:32 kubernetes-upgrade-171701 kubelet[2463]: E1107 17:19:32.701622    2463 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.685967  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:33 kubernetes-upgrade-171701 kubelet[2474]: E1107 17:19:33.446923    2474 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.686474  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:34 kubernetes-upgrade-171701 kubelet[2484]: E1107 17:19:34.193725    2484 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.686840  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:34 kubernetes-upgrade-171701 kubelet[2495]: E1107 17:19:34.935384    2495 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.687189  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:35 kubernetes-upgrade-171701 kubelet[2506]: E1107 17:19:35.693701    2506 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.687534  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:36 kubernetes-upgrade-171701 kubelet[2517]: E1107 17:19:36.444613    2517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.687883  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2526]: E1107 17:19:37.191232    2526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.688235  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2537]: E1107 17:19:37.940486    2537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.688587  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:38 kubernetes-upgrade-171701 kubelet[2547]: E1107 17:19:38.688575    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.688937  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:39 kubernetes-upgrade-171701 kubelet[2626]: E1107 17:19:39.468932    2626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
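All of the kubelet problems flagged above share one root cause: `--cni-conf-dir` was removed from kubelet (together with the other dockershim networking flags) in Kubernetes 1.24, so the newer kubelet started during this upgrade exits on flag parsing before the node can register. A minimal sketch of pulling the offending flag out of such a line (the sample line is copied from the journal excerpt above):

```shell
# One of the kubelet failure lines captured in the journal excerpt above.
line='E1107 17:19:30.441485    2431 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"'

# Extract the unknown flag; kubelet exits immediately on an unknown flag,
# so nothing downstream (apiserver static pod, node registration) ever runs.
flag=$(printf '%s\n' "$line" | sed -n 's/.*unknown flag: \(--[a-z-]*\).*/\1/p')
echo "$flag"
```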
	I1107 17:19:39.689078  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:19:39.689093  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:19:39.689227  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:19:39.689241  209319 out.go:239]   Nov 07 17:19:36 kubernetes-upgrade-171701 kubelet[2517]: E1107 17:19:36.444613    2517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.689246  209319 out.go:239]   Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2526]: E1107 17:19:37.191232    2526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.689251  209319 out.go:239]   Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2537]: E1107 17:19:37.940486    2537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.689257  209319 out.go:239]   Nov 07 17:19:38 kubernetes-upgrade-171701 kubelet[2547]: E1107 17:19:38.688575    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:39.689264  209319 out.go:239]   Nov 07 17:19:39 kubernetes-upgrade-171701 kubelet[2626]: E1107 17:19:39.468932    2626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:19:39.689269  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:19:39.689278  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:19:49.689901  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:19:49.744765  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:19:49.744856  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:19:49.771790  209319 cri.go:87] found id: ""
	I1107 17:19:49.771821  209319 logs.go:274] 0 containers: []
	W1107 17:19:49.771831  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:19:49.771840  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:19:49.771915  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:19:49.796835  209319 cri.go:87] found id: ""
	I1107 17:19:49.796865  209319 logs.go:274] 0 containers: []
	W1107 17:19:49.796873  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:19:49.796881  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:19:49.796935  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:19:49.824565  209319 cri.go:87] found id: ""
	I1107 17:19:49.824593  209319 logs.go:274] 0 containers: []
	W1107 17:19:49.824600  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:19:49.824606  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:19:49.824649  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:19:49.851306  209319 cri.go:87] found id: ""
	I1107 17:19:49.851343  209319 logs.go:274] 0 containers: []
	W1107 17:19:49.851352  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:19:49.851361  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:19:49.851415  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:19:49.878098  209319 cri.go:87] found id: ""
	I1107 17:19:49.878122  209319 logs.go:274] 0 containers: []
	W1107 17:19:49.878128  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:19:49.878135  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:19:49.878178  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:19:49.904500  209319 cri.go:87] found id: ""
	I1107 17:19:49.904539  209319 logs.go:274] 0 containers: []
	W1107 17:19:49.904546  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:19:49.904552  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:19:49.904608  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:19:49.939121  209319 cri.go:87] found id: ""
	I1107 17:19:49.939153  209319 logs.go:274] 0 containers: []
	W1107 17:19:49.939162  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:19:49.939171  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:19:49.939229  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:19:49.967609  209319 cri.go:87] found id: ""
	I1107 17:19:49.967645  209319 logs.go:274] 0 containers: []
	W1107 17:19:49.967655  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:19:49.967670  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:19:49.967684  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:19:49.994455  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:19:49.994480  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:19:50.021020  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:00 kubernetes-upgrade-171701 kubelet[1762]: E1107 17:19:00.450961    1762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.021652  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:01 kubernetes-upgrade-171701 kubelet[1774]: E1107 17:19:01.211217    1774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.022252  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:01 kubernetes-upgrade-171701 kubelet[1788]: E1107 17:19:01.951888    1788 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.022911  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:02 kubernetes-upgrade-171701 kubelet[1799]: E1107 17:19:02.695941    1799 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.023535  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:03 kubernetes-upgrade-171701 kubelet[1813]: E1107 17:19:03.466174    1813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.024154  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:04 kubernetes-upgrade-171701 kubelet[1827]: E1107 17:19:04.192476    1827 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.024778  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:04 kubernetes-upgrade-171701 kubelet[1843]: E1107 17:19:04.950039    1843 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.025395  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:05 kubernetes-upgrade-171701 kubelet[1854]: E1107 17:19:05.690960    1854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.026008  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:06 kubernetes-upgrade-171701 kubelet[1869]: E1107 17:19:06.447852    1869 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.026630  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:07 kubernetes-upgrade-171701 kubelet[1882]: E1107 17:19:07.181778    1882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.027081  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:07 kubernetes-upgrade-171701 kubelet[1898]: E1107 17:19:07.932726    1898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.027555  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:08 kubernetes-upgrade-171701 kubelet[1911]: E1107 17:19:08.696361    1911 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.028051  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:09 kubernetes-upgrade-171701 kubelet[1926]: E1107 17:19:09.445104    1926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.028537  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:10 kubernetes-upgrade-171701 kubelet[1938]: E1107 17:19:10.191469    1938 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.028979  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:10 kubernetes-upgrade-171701 kubelet[1954]: E1107 17:19:10.969721    1954 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.029507  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:11 kubernetes-upgrade-171701 kubelet[1967]: E1107 17:19:11.691353    1967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.030122  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:12 kubernetes-upgrade-171701 kubelet[1981]: E1107 17:19:12.449542    1981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.030901  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:13 kubernetes-upgrade-171701 kubelet[1993]: E1107 17:19:13.193306    1993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.031553  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:13 kubernetes-upgrade-171701 kubelet[2007]: E1107 17:19:13.940859    2007 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.032210  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:14 kubernetes-upgrade-171701 kubelet[2020]: E1107 17:19:14.683213    2020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.032864  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:15 kubernetes-upgrade-171701 kubelet[2036]: E1107 17:19:15.437870    2036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.033499  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:16 kubernetes-upgrade-171701 kubelet[2048]: E1107 17:19:16.185006    2048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.034150  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:16 kubernetes-upgrade-171701 kubelet[2063]: E1107 17:19:16.971632    2063 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.034849  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:17 kubernetes-upgrade-171701 kubelet[2074]: E1107 17:19:17.684970    2074 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.035505  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:18 kubernetes-upgrade-171701 kubelet[2089]: E1107 17:19:18.453084    2089 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.036171  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:19 kubernetes-upgrade-171701 kubelet[2100]: E1107 17:19:19.184377    2100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.036834  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:19 kubernetes-upgrade-171701 kubelet[2115]: E1107 17:19:19.938763    2115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.037454  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:20 kubernetes-upgrade-171701 kubelet[2129]: E1107 17:19:20.695573    2129 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.038066  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:21 kubernetes-upgrade-171701 kubelet[2144]: E1107 17:19:21.441885    2144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.038791  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:22 kubernetes-upgrade-171701 kubelet[2156]: E1107 17:19:22.202488    2156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.039420  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:22 kubernetes-upgrade-171701 kubelet[2171]: E1107 17:19:22.941888    2171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.040056  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:23 kubernetes-upgrade-171701 kubelet[2184]: E1107 17:19:23.712349    2184 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.040520  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:24 kubernetes-upgrade-171701 kubelet[2199]: E1107 17:19:24.449282    2199 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.040914  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:25 kubernetes-upgrade-171701 kubelet[2211]: E1107 17:19:25.183460    2211 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.041402  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:25 kubernetes-upgrade-171701 kubelet[2226]: E1107 17:19:25.933938    2226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.042037  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:26 kubernetes-upgrade-171701 kubelet[2239]: E1107 17:19:26.691045    2239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.042464  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:27 kubernetes-upgrade-171701 kubelet[2254]: E1107 17:19:27.433159    2254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.042853  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:28 kubernetes-upgrade-171701 kubelet[2267]: E1107 17:19:28.192383    2267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.043430  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:28 kubernetes-upgrade-171701 kubelet[2409]: E1107 17:19:28.940418    2409 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.044082  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:29 kubernetes-upgrade-171701 kubelet[2419]: E1107 17:19:29.685113    2419 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.044689  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:30 kubernetes-upgrade-171701 kubelet[2431]: E1107 17:19:30.441485    2431 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.045285  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:31 kubernetes-upgrade-171701 kubelet[2442]: E1107 17:19:31.190348    2442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.045750  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:31 kubernetes-upgrade-171701 kubelet[2452]: E1107 17:19:31.937101    2452 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.046385  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:32 kubernetes-upgrade-171701 kubelet[2463]: E1107 17:19:32.701622    2463 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.047008  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:33 kubernetes-upgrade-171701 kubelet[2474]: E1107 17:19:33.446923    2474 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.047668  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:34 kubernetes-upgrade-171701 kubelet[2484]: E1107 17:19:34.193725    2484 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.048327  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:34 kubernetes-upgrade-171701 kubelet[2495]: E1107 17:19:34.935384    2495 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.048991  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:35 kubernetes-upgrade-171701 kubelet[2506]: E1107 17:19:35.693701    2506 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.049631  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:36 kubernetes-upgrade-171701 kubelet[2517]: E1107 17:19:36.444613    2517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.050235  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2526]: E1107 17:19:37.191232    2526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.050936  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2537]: E1107 17:19:37.940486    2537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.051497  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:38 kubernetes-upgrade-171701 kubelet[2547]: E1107 17:19:38.688575    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.052112  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:39 kubernetes-upgrade-171701 kubelet[2626]: E1107 17:19:39.468932    2626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.052765  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:40 kubernetes-upgrade-171701 kubelet[2699]: E1107 17:19:40.191262    2699 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.053417  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:40 kubernetes-upgrade-171701 kubelet[2710]: E1107 17:19:40.940040    2710 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.054098  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:41 kubernetes-upgrade-171701 kubelet[2721]: E1107 17:19:41.692290    2721 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.054788  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:42 kubernetes-upgrade-171701 kubelet[2732]: E1107 17:19:42.444170    2732 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.055218  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:43 kubernetes-upgrade-171701 kubelet[2742]: E1107 17:19:43.184160    2742 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.055613  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:43 kubernetes-upgrade-171701 kubelet[2753]: E1107 17:19:43.945445    2753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.055999  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:44 kubernetes-upgrade-171701 kubelet[2764]: E1107 17:19:44.694894    2764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.056416  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:45 kubernetes-upgrade-171701 kubelet[2774]: E1107 17:19:45.442013    2774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.056789  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:46 kubernetes-upgrade-171701 kubelet[2785]: E1107 17:19:46.195987    2785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.057148  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:46 kubernetes-upgrade-171701 kubelet[2796]: E1107 17:19:46.937843    2796 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.057510  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:47 kubernetes-upgrade-171701 kubelet[2806]: E1107 17:19:47.697895    2806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.057879  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:48 kubernetes-upgrade-171701 kubelet[2817]: E1107 17:19:48.438431    2817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.058357  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:49 kubernetes-upgrade-171701 kubelet[2828]: E1107 17:19:49.200878    2828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.058954  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:49 kubernetes-upgrade-171701 kubelet[2906]: E1107 17:19:49.947875    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
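Note that every `kubelet[NNNN]` entry above carries a different PID: systemd restarts the unit after each exit, spawning a fresh kubelet roughly every 750 ms for the whole window. Counting distinct PIDs is a quick way to gauge the length of the crash loop; a small sketch over two of the lines above:

```shell
# Two consecutive journal entries from the excerpt above: note the changing
# PID in "kubelet[NNNN]" -- each restart is a brand-new process.
log='Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2526]: E1107 17:19:37.191232    2526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2537]: E1107 17:19:37.940486    2537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"'

# Distinct kubelet PIDs == number of restarts covered by the excerpt.
pids=$(printf '%s\n' "$log" | grep -o 'kubelet\[[0-9]*\]' | sort -u | wc -l)
echo "$pids"
```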
	I1107 17:19:50.059161  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:19:50.059190  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:19:50.083604  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:19:50.083699  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:19:50.160851  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
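The `describe nodes` failure here is a downstream symptom rather than a separate fault: with kubelet never surviving startup, the kube-apiserver static pod is never created, so every kubectl call against localhost:8443 is refused. A rough sketch of the kind of check that distinguishes "control plane never came up" from a transient API error:

```shell
# The refusal message captured in the stderr block above; every kubectl
# invocation fails identically while kubelet is crash-looping.
msg='The connection to the server localhost:8443 was refused - did you specify the right host or port?'

# Treat "connection refused" against the apiserver port as the control
# plane having never started, as opposed to a momentary API hiccup.
case "$msg" in
  *"localhost:8443 was refused"*) status=down ;;
  *) status=up ;;
esac
echo "$status"
```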
	I1107 17:19:50.160879  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:19:50.160894  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:19:50.216882  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:19:50.216994  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:19:50.217221  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:19:50.217244  209319 out.go:239]   Nov 07 17:19:46 kubernetes-upgrade-171701 kubelet[2796]: E1107 17:19:46.937843    2796 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.217253  209319 out.go:239]   Nov 07 17:19:47 kubernetes-upgrade-171701 kubelet[2806]: E1107 17:19:47.697895    2806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.217297  209319 out.go:239]   Nov 07 17:19:48 kubernetes-upgrade-171701 kubelet[2817]: E1107 17:19:48.438431    2817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.217326  209319 out.go:239]   Nov 07 17:19:49 kubernetes-upgrade-171701 kubelet[2828]: E1107 17:19:49.200878    2828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:19:50.217351  209319 out.go:239]   Nov 07 17:19:49 kubernetes-upgrade-171701 kubelet[2906]: E1107 17:19:49.947875    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:19:50.217380  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:19:50.217400  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:20:00.219359  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:20:00.243834  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:20:00.243922  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:20:00.273034  209319 cri.go:87] found id: ""
	I1107 17:20:00.273064  209319 logs.go:274] 0 containers: []
	W1107 17:20:00.273074  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:20:00.273083  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:20:00.273163  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:20:00.304481  209319 cri.go:87] found id: ""
	I1107 17:20:00.304515  209319 logs.go:274] 0 containers: []
	W1107 17:20:00.304523  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:20:00.304533  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:20:00.304592  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:20:00.335528  209319 cri.go:87] found id: ""
	I1107 17:20:00.335560  209319 logs.go:274] 0 containers: []
	W1107 17:20:00.335569  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:20:00.335578  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:20:00.335635  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:20:00.370699  209319 cri.go:87] found id: ""
	I1107 17:20:00.370727  209319 logs.go:274] 0 containers: []
	W1107 17:20:00.370736  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:20:00.370745  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:20:00.370796  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:20:00.404030  209319 cri.go:87] found id: ""
	I1107 17:20:00.404061  209319 logs.go:274] 0 containers: []
	W1107 17:20:00.404070  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:20:00.404079  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:20:00.404148  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:20:00.437121  209319 cri.go:87] found id: ""
	I1107 17:20:00.437158  209319 logs.go:274] 0 containers: []
	W1107 17:20:00.437167  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:20:00.437176  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:20:00.437233  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:20:00.466081  209319 cri.go:87] found id: ""
	I1107 17:20:00.466113  209319 logs.go:274] 0 containers: []
	W1107 17:20:00.466122  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:20:00.466130  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:20:00.466178  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:20:00.488296  209319 cri.go:87] found id: ""
	I1107 17:20:00.488321  209319 logs.go:274] 0 containers: []
	W1107 17:20:00.488327  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:20:00.488336  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:20:00.488348  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:20:00.557226  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
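The `connection refused` on `localhost:8443` above follows directly from the earlier probes: every `sudo crictl ps -a --quiet --name=…` call printed nothing, so no apiserver container exists to answer on that port. A minimal sketch of the emptiness check minikube's log gatherer is effectively making (the `ids` value is a stand-in here, not a live `crictl` call):

```shell
# Stand-in for: sudo crictl ps -a --quiet --name=kube-apiserver
# (the real command prints one container ID per line, or nothing).
ids=""

# An empty ID list is the signal that the control plane never started,
# which is why kubectl then gets "connection refused" on localhost:8443.
if [ -z "$ids" ]; then
  echo 'No container was found matching "kube-apiserver"'
fi
```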
	I1107 17:20:00.557256  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:20:00.557272  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:20:00.594127  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:20:00.594160  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:20:00.628721  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:20:00.628761  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:20:00.649408  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:10 kubernetes-upgrade-171701 kubelet[1954]: E1107 17:19:10.969721    1954 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.650049  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:11 kubernetes-upgrade-171701 kubelet[1967]: E1107 17:19:11.691353    1967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.650706  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:12 kubernetes-upgrade-171701 kubelet[1981]: E1107 17:19:12.449542    1981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.651136  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:13 kubernetes-upgrade-171701 kubelet[1993]: E1107 17:19:13.193306    1993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.651490  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:13 kubernetes-upgrade-171701 kubelet[2007]: E1107 17:19:13.940859    2007 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.651850  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:14 kubernetes-upgrade-171701 kubelet[2020]: E1107 17:19:14.683213    2020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.652221  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:15 kubernetes-upgrade-171701 kubelet[2036]: E1107 17:19:15.437870    2036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.652582  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:16 kubernetes-upgrade-171701 kubelet[2048]: E1107 17:19:16.185006    2048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.652937  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:16 kubernetes-upgrade-171701 kubelet[2063]: E1107 17:19:16.971632    2063 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.653299  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:17 kubernetes-upgrade-171701 kubelet[2074]: E1107 17:19:17.684970    2074 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.653654  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:18 kubernetes-upgrade-171701 kubelet[2089]: E1107 17:19:18.453084    2089 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.654154  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:19 kubernetes-upgrade-171701 kubelet[2100]: E1107 17:19:19.184377    2100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.654782  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:19 kubernetes-upgrade-171701 kubelet[2115]: E1107 17:19:19.938763    2115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.655314  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:20 kubernetes-upgrade-171701 kubelet[2129]: E1107 17:19:20.695573    2129 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.655884  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:21 kubernetes-upgrade-171701 kubelet[2144]: E1107 17:19:21.441885    2144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.656303  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:22 kubernetes-upgrade-171701 kubelet[2156]: E1107 17:19:22.202488    2156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.656797  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:22 kubernetes-upgrade-171701 kubelet[2171]: E1107 17:19:22.941888    2171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.657428  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:23 kubernetes-upgrade-171701 kubelet[2184]: E1107 17:19:23.712349    2184 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.658032  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:24 kubernetes-upgrade-171701 kubelet[2199]: E1107 17:19:24.449282    2199 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.658644  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:25 kubernetes-upgrade-171701 kubelet[2211]: E1107 17:19:25.183460    2211 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.659258  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:25 kubernetes-upgrade-171701 kubelet[2226]: E1107 17:19:25.933938    2226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.659879  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:26 kubernetes-upgrade-171701 kubelet[2239]: E1107 17:19:26.691045    2239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.660496  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:27 kubernetes-upgrade-171701 kubelet[2254]: E1107 17:19:27.433159    2254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.660924  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:28 kubernetes-upgrade-171701 kubelet[2267]: E1107 17:19:28.192383    2267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.661284  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:28 kubernetes-upgrade-171701 kubelet[2409]: E1107 17:19:28.940418    2409 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.661661  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:29 kubernetes-upgrade-171701 kubelet[2419]: E1107 17:19:29.685113    2419 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.662026  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:30 kubernetes-upgrade-171701 kubelet[2431]: E1107 17:19:30.441485    2431 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.662483  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:31 kubernetes-upgrade-171701 kubelet[2442]: E1107 17:19:31.190348    2442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.662846  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:31 kubernetes-upgrade-171701 kubelet[2452]: E1107 17:19:31.937101    2452 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.663206  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:32 kubernetes-upgrade-171701 kubelet[2463]: E1107 17:19:32.701622    2463 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.663565  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:33 kubernetes-upgrade-171701 kubelet[2474]: E1107 17:19:33.446923    2474 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.663938  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:34 kubernetes-upgrade-171701 kubelet[2484]: E1107 17:19:34.193725    2484 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.664302  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:34 kubernetes-upgrade-171701 kubelet[2495]: E1107 17:19:34.935384    2495 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.664662  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:35 kubernetes-upgrade-171701 kubelet[2506]: E1107 17:19:35.693701    2506 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.665018  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:36 kubernetes-upgrade-171701 kubelet[2517]: E1107 17:19:36.444613    2517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.665378  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2526]: E1107 17:19:37.191232    2526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.665737  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2537]: E1107 17:19:37.940486    2537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.666118  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:38 kubernetes-upgrade-171701 kubelet[2547]: E1107 17:19:38.688575    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.666505  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:39 kubernetes-upgrade-171701 kubelet[2626]: E1107 17:19:39.468932    2626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.666861  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:40 kubernetes-upgrade-171701 kubelet[2699]: E1107 17:19:40.191262    2699 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.667214  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:40 kubernetes-upgrade-171701 kubelet[2710]: E1107 17:19:40.940040    2710 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.667574  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:41 kubernetes-upgrade-171701 kubelet[2721]: E1107 17:19:41.692290    2721 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.667932  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:42 kubernetes-upgrade-171701 kubelet[2732]: E1107 17:19:42.444170    2732 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.668291  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:43 kubernetes-upgrade-171701 kubelet[2742]: E1107 17:19:43.184160    2742 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.668658  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:43 kubernetes-upgrade-171701 kubelet[2753]: E1107 17:19:43.945445    2753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.669017  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:44 kubernetes-upgrade-171701 kubelet[2764]: E1107 17:19:44.694894    2764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.669395  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:45 kubernetes-upgrade-171701 kubelet[2774]: E1107 17:19:45.442013    2774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.669749  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:46 kubernetes-upgrade-171701 kubelet[2785]: E1107 17:19:46.195987    2785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.670108  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:46 kubernetes-upgrade-171701 kubelet[2796]: E1107 17:19:46.937843    2796 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.670474  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:47 kubernetes-upgrade-171701 kubelet[2806]: E1107 17:19:47.697895    2806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.670829  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:48 kubernetes-upgrade-171701 kubelet[2817]: E1107 17:19:48.438431    2817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.671187  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:49 kubernetes-upgrade-171701 kubelet[2828]: E1107 17:19:49.200878    2828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.671534  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:49 kubernetes-upgrade-171701 kubelet[2906]: E1107 17:19:49.947875    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.671881  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:50 kubernetes-upgrade-171701 kubelet[2987]: E1107 17:19:50.687852    2987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.672232  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:51 kubernetes-upgrade-171701 kubelet[2997]: E1107 17:19:51.436727    2997 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.672589  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:52 kubernetes-upgrade-171701 kubelet[3008]: E1107 17:19:52.189944    3008 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.672959  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:52 kubernetes-upgrade-171701 kubelet[3018]: E1107 17:19:52.932957    3018 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.673314  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:53 kubernetes-upgrade-171701 kubelet[3029]: E1107 17:19:53.705204    3029 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.673668  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:54 kubernetes-upgrade-171701 kubelet[3040]: E1107 17:19:54.431405    3040 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.674070  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:55 kubernetes-upgrade-171701 kubelet[3051]: E1107 17:19:55.193359    3051 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.674481  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:55 kubernetes-upgrade-171701 kubelet[3061]: E1107 17:19:55.941035    3061 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.674841  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:56 kubernetes-upgrade-171701 kubelet[3071]: E1107 17:19:56.696082    3071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.675208  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:57 kubernetes-upgrade-171701 kubelet[3081]: E1107 17:19:57.444814    3081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.675568  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3092]: E1107 17:19:58.201611    3092 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.675915  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3102]: E1107 17:19:58.932200    3102 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.676308  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:59 kubernetes-upgrade-171701 kubelet[3113]: E1107 17:19:59.692631    3113 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.676686  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:00 kubernetes-upgrade-171701 kubelet[3182]: E1107 17:20:00.460592    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
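The dozens of "Found kubelet problem" warnings above all repeat a single root cause: every kubelet restart dies parsing the same retired flag. A quick sketch that deduplicates the journal output confirms this (two sample lines are inlined here in place of piping from `sudo journalctl -u kubelet -n 400`):

```shell
# Two sample journal lines copied from the run above; in practice this
# would come from `sudo journalctl -u kubelet -n 400`.
sample='Nov 07 17:19:57 kubernetes-upgrade-171701 kubelet[3081]: E1107 17:19:57.444814    3081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3092]: E1107 17:19:58.201611    3092 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"'

# Extract the distinct failure reasons; a healthy node would print nothing.
printf '%s\n' "$sample" | grep -o 'unknown flag: [^"]*' | sort -u
```

Every restart in the crash-loop reduces to the one line `unknown flag: --cni-conf-dir`, so this is one configuration error repeated, not many independent failures.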
	I1107 17:20:00.676844  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:20:00.676871  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:20:00.693132  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:20:00.693155  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:20:00.693266  209319 out.go:239] X Problems detected in kubelet:

	W1107 17:20:00.693281  209319 out.go:239]   Nov 07 17:19:57 kubernetes-upgrade-171701 kubelet[3081]: E1107 17:19:57.444814    3081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.693291  209319 out.go:239]   Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3092]: E1107 17:19:58.201611    3092 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.693304  209319 out.go:239]   Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3102]: E1107 17:19:58.932200    3102 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.693320  209319 out.go:239]   Nov 07 17:19:59 kubernetes-upgrade-171701 kubelet[3113]: E1107 17:19:59.692631    3113 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:00.693331  209319 out.go:239]   Nov 07 17:20:00 kubernetes-upgrade-171701 kubelet[3182]: E1107 17:20:00.460592    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:20:00.693341  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:20:00.693350  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
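`--cni-conf-dir` is a dockershim-era kubelet flag that recent kubelet releases no longer accept, so a saved kubelet configuration that carries it across the upgrade to v1.25.3 will crash-loop exactly as logged above. A hedged remediation sketch (the flag-file layout below is an assumption modeled on a kubeadm-style `kubeadm-flags.env`; minikube generates its own systemd unit, so the real path and contents may differ):

```shell
# Assumed contents of the kubelet extra-args file on the upgraded node.
flags='KUBELET_KUBEADM_ARGS="--container-runtime=remote --cni-conf-dir=/etc/cni/net.d --pod-infra-container-image=registry.k8s.io/pause:3.8"'

# Strip the retired flag and its value; newer kubelets reject it outright.
printf '%s\n' "$flags" | sed 's/ --cni-conf-dir=[^" ]*//'

# After editing the real file, the node would need:
#   sudo systemctl daemon-reload && sudo systemctl restart kubelet
```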
	I1107 17:20:10.694474  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:20:10.744100  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:20:10.744162  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:20:10.767787  209319 cri.go:87] found id: ""
	I1107 17:20:10.767817  209319 logs.go:274] 0 containers: []
	W1107 17:20:10.767826  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:20:10.767834  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:20:10.767881  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:20:10.791485  209319 cri.go:87] found id: ""
	I1107 17:20:10.791514  209319 logs.go:274] 0 containers: []
	W1107 17:20:10.791521  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:20:10.791528  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:20:10.791584  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:20:10.815750  209319 cri.go:87] found id: ""
	I1107 17:20:10.815780  209319 logs.go:274] 0 containers: []
	W1107 17:20:10.815789  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:20:10.815796  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:20:10.815850  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:20:10.838387  209319 cri.go:87] found id: ""
	I1107 17:20:10.838416  209319 logs.go:274] 0 containers: []
	W1107 17:20:10.838422  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:20:10.838429  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:20:10.838475  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:20:10.862870  209319 cri.go:87] found id: ""
	I1107 17:20:10.862900  209319 logs.go:274] 0 containers: []
	W1107 17:20:10.862909  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:20:10.862917  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:20:10.862980  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:20:10.885345  209319 cri.go:87] found id: ""
	I1107 17:20:10.885368  209319 logs.go:274] 0 containers: []
	W1107 17:20:10.885375  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:20:10.885385  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:20:10.885425  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:20:10.911312  209319 cri.go:87] found id: ""
	I1107 17:20:10.911339  209319 logs.go:274] 0 containers: []
	W1107 17:20:10.911348  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:20:10.911357  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:20:10.911408  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:20:10.937103  209319 cri.go:87] found id: ""
	I1107 17:20:10.937150  209319 logs.go:274] 0 containers: []
	W1107 17:20:10.937161  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:20:10.937175  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:20:10.937197  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:20:10.956547  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:21 kubernetes-upgrade-171701 kubelet[2144]: E1107 17:19:21.441885    2144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.956915  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:22 kubernetes-upgrade-171701 kubelet[2156]: E1107 17:19:22.202488    2156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.957306  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:22 kubernetes-upgrade-171701 kubelet[2171]: E1107 17:19:22.941888    2171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.957660  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:23 kubernetes-upgrade-171701 kubelet[2184]: E1107 17:19:23.712349    2184 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.958007  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:24 kubernetes-upgrade-171701 kubelet[2199]: E1107 17:19:24.449282    2199 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.958438  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:25 kubernetes-upgrade-171701 kubelet[2211]: E1107 17:19:25.183460    2211 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.958798  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:25 kubernetes-upgrade-171701 kubelet[2226]: E1107 17:19:25.933938    2226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.959142  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:26 kubernetes-upgrade-171701 kubelet[2239]: E1107 17:19:26.691045    2239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.959521  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:27 kubernetes-upgrade-171701 kubelet[2254]: E1107 17:19:27.433159    2254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.959869  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:28 kubernetes-upgrade-171701 kubelet[2267]: E1107 17:19:28.192383    2267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.960248  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:28 kubernetes-upgrade-171701 kubelet[2409]: E1107 17:19:28.940418    2409 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.960657  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:29 kubernetes-upgrade-171701 kubelet[2419]: E1107 17:19:29.685113    2419 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.961035  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:30 kubernetes-upgrade-171701 kubelet[2431]: E1107 17:19:30.441485    2431 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.961401  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:31 kubernetes-upgrade-171701 kubelet[2442]: E1107 17:19:31.190348    2442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.961755  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:31 kubernetes-upgrade-171701 kubelet[2452]: E1107 17:19:31.937101    2452 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.962125  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:32 kubernetes-upgrade-171701 kubelet[2463]: E1107 17:19:32.701622    2463 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.962501  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:33 kubernetes-upgrade-171701 kubelet[2474]: E1107 17:19:33.446923    2474 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.962856  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:34 kubernetes-upgrade-171701 kubelet[2484]: E1107 17:19:34.193725    2484 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.963209  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:34 kubernetes-upgrade-171701 kubelet[2495]: E1107 17:19:34.935384    2495 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.963558  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:35 kubernetes-upgrade-171701 kubelet[2506]: E1107 17:19:35.693701    2506 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.963914  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:36 kubernetes-upgrade-171701 kubelet[2517]: E1107 17:19:36.444613    2517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.964276  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2526]: E1107 17:19:37.191232    2526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.964644  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2537]: E1107 17:19:37.940486    2537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.964998  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:38 kubernetes-upgrade-171701 kubelet[2547]: E1107 17:19:38.688575    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.965371  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:39 kubernetes-upgrade-171701 kubelet[2626]: E1107 17:19:39.468932    2626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.965722  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:40 kubernetes-upgrade-171701 kubelet[2699]: E1107 17:19:40.191262    2699 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.966091  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:40 kubernetes-upgrade-171701 kubelet[2710]: E1107 17:19:40.940040    2710 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.967225  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:41 kubernetes-upgrade-171701 kubelet[2721]: E1107 17:19:41.692290    2721 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.967590  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:42 kubernetes-upgrade-171701 kubelet[2732]: E1107 17:19:42.444170    2732 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.967939  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:43 kubernetes-upgrade-171701 kubelet[2742]: E1107 17:19:43.184160    2742 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.968294  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:43 kubernetes-upgrade-171701 kubelet[2753]: E1107 17:19:43.945445    2753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.968643  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:44 kubernetes-upgrade-171701 kubelet[2764]: E1107 17:19:44.694894    2764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.968996  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:45 kubernetes-upgrade-171701 kubelet[2774]: E1107 17:19:45.442013    2774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.969361  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:46 kubernetes-upgrade-171701 kubelet[2785]: E1107 17:19:46.195987    2785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.969711  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:46 kubernetes-upgrade-171701 kubelet[2796]: E1107 17:19:46.937843    2796 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.970067  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:47 kubernetes-upgrade-171701 kubelet[2806]: E1107 17:19:47.697895    2806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.970438  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:48 kubernetes-upgrade-171701 kubelet[2817]: E1107 17:19:48.438431    2817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.970797  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:49 kubernetes-upgrade-171701 kubelet[2828]: E1107 17:19:49.200878    2828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.971152  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:49 kubernetes-upgrade-171701 kubelet[2906]: E1107 17:19:49.947875    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.971505  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:50 kubernetes-upgrade-171701 kubelet[2987]: E1107 17:19:50.687852    2987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.971854  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:51 kubernetes-upgrade-171701 kubelet[2997]: E1107 17:19:51.436727    2997 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.972201  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:52 kubernetes-upgrade-171701 kubelet[3008]: E1107 17:19:52.189944    3008 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.972552  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:52 kubernetes-upgrade-171701 kubelet[3018]: E1107 17:19:52.932957    3018 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.972900  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:53 kubernetes-upgrade-171701 kubelet[3029]: E1107 17:19:53.705204    3029 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.973258  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:54 kubernetes-upgrade-171701 kubelet[3040]: E1107 17:19:54.431405    3040 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.973609  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:55 kubernetes-upgrade-171701 kubelet[3051]: E1107 17:19:55.193359    3051 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.973959  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:55 kubernetes-upgrade-171701 kubelet[3061]: E1107 17:19:55.941035    3061 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.974307  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:56 kubernetes-upgrade-171701 kubelet[3071]: E1107 17:19:56.696082    3071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.974693  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:57 kubernetes-upgrade-171701 kubelet[3081]: E1107 17:19:57.444814    3081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.975046  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3092]: E1107 17:19:58.201611    3092 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.975394  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3102]: E1107 17:19:58.932200    3102 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.975748  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:59 kubernetes-upgrade-171701 kubelet[3113]: E1107 17:19:59.692631    3113 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.976102  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:00 kubernetes-upgrade-171701 kubelet[3182]: E1107 17:20:00.460592    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.976451  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:01 kubernetes-upgrade-171701 kubelet[3269]: E1107 17:20:01.185740    3269 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.976798  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:01 kubernetes-upgrade-171701 kubelet[3280]: E1107 17:20:01.946249    3280 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.977148  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:02 kubernetes-upgrade-171701 kubelet[3291]: E1107 17:20:02.692964    3291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.977495  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:03 kubernetes-upgrade-171701 kubelet[3302]: E1107 17:20:03.435122    3302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.977856  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:04 kubernetes-upgrade-171701 kubelet[3313]: E1107 17:20:04.189750    3313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.978247  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:04 kubernetes-upgrade-171701 kubelet[3325]: E1107 17:20:04.930640    3325 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.978625  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:05 kubernetes-upgrade-171701 kubelet[3336]: E1107 17:20:05.684994    3336 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.978977  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:06 kubernetes-upgrade-171701 kubelet[3347]: E1107 17:20:06.431306    3347 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.979327  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:07 kubernetes-upgrade-171701 kubelet[3358]: E1107 17:20:07.185085    3358 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.979679  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:07 kubernetes-upgrade-171701 kubelet[3369]: E1107 17:20:07.933941    3369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.980035  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:08 kubernetes-upgrade-171701 kubelet[3380]: E1107 17:20:08.686737    3380 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.980382  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:09 kubernetes-upgrade-171701 kubelet[3390]: E1107 17:20:09.431579    3390 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.980737  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3402]: E1107 17:20:10.187571    3402 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:10.981111  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3488]: E1107 17:20:10.935744    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:20:10.981233  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:20:10.981251  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:20:10.997593  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:20:10.997619  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:20:11.052933  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:20:11.052958  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:20:11.052974  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:20:11.087313  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:20:11.087347  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:20:11.112928  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:20:11.112955  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:20:11.113058  209319 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1107 17:20:11.113075  209319 out.go:239]   Nov 07 17:20:07 kubernetes-upgrade-171701 kubelet[3369]: E1107 17:20:07.933941    3369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 07 17:20:07 kubernetes-upgrade-171701 kubelet[3369]: E1107 17:20:07.933941    3369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:11.113084  209319 out.go:239]   Nov 07 17:20:08 kubernetes-upgrade-171701 kubelet[3380]: E1107 17:20:08.686737    3380 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 07 17:20:08 kubernetes-upgrade-171701 kubelet[3380]: E1107 17:20:08.686737    3380 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:11.113093  209319 out.go:239]   Nov 07 17:20:09 kubernetes-upgrade-171701 kubelet[3390]: E1107 17:20:09.431579    3390 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 07 17:20:09 kubernetes-upgrade-171701 kubelet[3390]: E1107 17:20:09.431579    3390 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:11.113099  209319 out.go:239]   Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3402]: E1107 17:20:10.187571    3402 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3402]: E1107 17:20:10.187571    3402 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:11.113106  209319 out.go:239]   Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3488]: E1107 17:20:10.935744    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3488]: E1107 17:20:10.935744    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:20:11.113111  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:20:11.113118  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:20:21.114235  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:20:21.244465  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:20:21.244534  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:20:21.270354  209319 cri.go:87] found id: ""
	I1107 17:20:21.270381  209319 logs.go:274] 0 containers: []
	W1107 17:20:21.270387  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:20:21.270394  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:20:21.270446  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:20:21.293204  209319 cri.go:87] found id: ""
	I1107 17:20:21.293232  209319 logs.go:274] 0 containers: []
	W1107 17:20:21.293239  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:20:21.293245  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:20:21.293286  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:20:21.316567  209319 cri.go:87] found id: ""
	I1107 17:20:21.316598  209319 logs.go:274] 0 containers: []
	W1107 17:20:21.316607  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:20:21.316615  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:20:21.316674  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:20:21.341001  209319 cri.go:87] found id: ""
	I1107 17:20:21.341026  209319 logs.go:274] 0 containers: []
	W1107 17:20:21.341034  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:20:21.341043  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:20:21.341098  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:20:21.366268  209319 cri.go:87] found id: ""
	I1107 17:20:21.366295  209319 logs.go:274] 0 containers: []
	W1107 17:20:21.366302  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:20:21.366351  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:20:21.366416  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:20:21.390867  209319 cri.go:87] found id: ""
	I1107 17:20:21.390891  209319 logs.go:274] 0 containers: []
	W1107 17:20:21.390899  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:20:21.390905  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:20:21.390946  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:20:21.415991  209319 cri.go:87] found id: ""
	I1107 17:20:21.416023  209319 logs.go:274] 0 containers: []
	W1107 17:20:21.416032  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:20:21.416041  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:20:21.416098  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:20:21.445225  209319 cri.go:87] found id: ""
	I1107 17:20:21.445278  209319 logs.go:274] 0 containers: []
	W1107 17:20:21.445287  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:20:21.445298  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:20:21.445310  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:20:21.461147  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:31 kubernetes-upgrade-171701 kubelet[2452]: E1107 17:19:31.937101    2452 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.461609  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:32 kubernetes-upgrade-171701 kubelet[2463]: E1107 17:19:32.701622    2463 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.461994  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:33 kubernetes-upgrade-171701 kubelet[2474]: E1107 17:19:33.446923    2474 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.462431  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:34 kubernetes-upgrade-171701 kubelet[2484]: E1107 17:19:34.193725    2484 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.462872  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:34 kubernetes-upgrade-171701 kubelet[2495]: E1107 17:19:34.935384    2495 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.463264  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:35 kubernetes-upgrade-171701 kubelet[2506]: E1107 17:19:35.693701    2506 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.463658  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:36 kubernetes-upgrade-171701 kubelet[2517]: E1107 17:19:36.444613    2517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.464163  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2526]: E1107 17:19:37.191232    2526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.464742  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:37 kubernetes-upgrade-171701 kubelet[2537]: E1107 17:19:37.940486    2537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.465213  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:38 kubernetes-upgrade-171701 kubelet[2547]: E1107 17:19:38.688575    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.465588  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:39 kubernetes-upgrade-171701 kubelet[2626]: E1107 17:19:39.468932    2626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.465960  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:40 kubernetes-upgrade-171701 kubelet[2699]: E1107 17:19:40.191262    2699 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.466358  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:40 kubernetes-upgrade-171701 kubelet[2710]: E1107 17:19:40.940040    2710 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.466715  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:41 kubernetes-upgrade-171701 kubelet[2721]: E1107 17:19:41.692290    2721 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.467076  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:42 kubernetes-upgrade-171701 kubelet[2732]: E1107 17:19:42.444170    2732 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.467443  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:43 kubernetes-upgrade-171701 kubelet[2742]: E1107 17:19:43.184160    2742 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.467814  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:43 kubernetes-upgrade-171701 kubelet[2753]: E1107 17:19:43.945445    2753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.468184  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:44 kubernetes-upgrade-171701 kubelet[2764]: E1107 17:19:44.694894    2764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.468549  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:45 kubernetes-upgrade-171701 kubelet[2774]: E1107 17:19:45.442013    2774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.468912  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:46 kubernetes-upgrade-171701 kubelet[2785]: E1107 17:19:46.195987    2785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.469283  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:46 kubernetes-upgrade-171701 kubelet[2796]: E1107 17:19:46.937843    2796 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.469667  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:47 kubernetes-upgrade-171701 kubelet[2806]: E1107 17:19:47.697895    2806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.470030  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:48 kubernetes-upgrade-171701 kubelet[2817]: E1107 17:19:48.438431    2817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.470404  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:49 kubernetes-upgrade-171701 kubelet[2828]: E1107 17:19:49.200878    2828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.470769  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:49 kubernetes-upgrade-171701 kubelet[2906]: E1107 17:19:49.947875    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.471146  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:50 kubernetes-upgrade-171701 kubelet[2987]: E1107 17:19:50.687852    2987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.471511  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:51 kubernetes-upgrade-171701 kubelet[2997]: E1107 17:19:51.436727    2997 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.471875  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:52 kubernetes-upgrade-171701 kubelet[3008]: E1107 17:19:52.189944    3008 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.472239  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:52 kubernetes-upgrade-171701 kubelet[3018]: E1107 17:19:52.932957    3018 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.472601  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:53 kubernetes-upgrade-171701 kubelet[3029]: E1107 17:19:53.705204    3029 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.472960  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:54 kubernetes-upgrade-171701 kubelet[3040]: E1107 17:19:54.431405    3040 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.473324  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:55 kubernetes-upgrade-171701 kubelet[3051]: E1107 17:19:55.193359    3051 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.473690  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:55 kubernetes-upgrade-171701 kubelet[3061]: E1107 17:19:55.941035    3061 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.474051  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:56 kubernetes-upgrade-171701 kubelet[3071]: E1107 17:19:56.696082    3071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.474443  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:57 kubernetes-upgrade-171701 kubelet[3081]: E1107 17:19:57.444814    3081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.474804  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3092]: E1107 17:19:58.201611    3092 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.475169  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3102]: E1107 17:19:58.932200    3102 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.475563  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:59 kubernetes-upgrade-171701 kubelet[3113]: E1107 17:19:59.692631    3113 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.475925  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:00 kubernetes-upgrade-171701 kubelet[3182]: E1107 17:20:00.460592    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.476298  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:01 kubernetes-upgrade-171701 kubelet[3269]: E1107 17:20:01.185740    3269 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.476674  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:01 kubernetes-upgrade-171701 kubelet[3280]: E1107 17:20:01.946249    3280 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.477067  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:02 kubernetes-upgrade-171701 kubelet[3291]: E1107 17:20:02.692964    3291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.477428  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:03 kubernetes-upgrade-171701 kubelet[3302]: E1107 17:20:03.435122    3302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.477793  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:04 kubernetes-upgrade-171701 kubelet[3313]: E1107 17:20:04.189750    3313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.478184  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:04 kubernetes-upgrade-171701 kubelet[3325]: E1107 17:20:04.930640    3325 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.478581  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:05 kubernetes-upgrade-171701 kubelet[3336]: E1107 17:20:05.684994    3336 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.478948  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:06 kubernetes-upgrade-171701 kubelet[3347]: E1107 17:20:06.431306    3347 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.479322  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:07 kubernetes-upgrade-171701 kubelet[3358]: E1107 17:20:07.185085    3358 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.479683  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:07 kubernetes-upgrade-171701 kubelet[3369]: E1107 17:20:07.933941    3369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.480046  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:08 kubernetes-upgrade-171701 kubelet[3380]: E1107 17:20:08.686737    3380 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.480414  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:09 kubernetes-upgrade-171701 kubelet[3390]: E1107 17:20:09.431579    3390 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.480790  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3402]: E1107 17:20:10.187571    3402 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.481237  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3488]: E1107 17:20:10.935744    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.481847  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:11 kubernetes-upgrade-171701 kubelet[3559]: E1107 17:20:11.690274    3559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.482285  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:12 kubernetes-upgrade-171701 kubelet[3569]: E1107 17:20:12.434698    3569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.482689  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:13 kubernetes-upgrade-171701 kubelet[3582]: E1107 17:20:13.189943    3582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.483092  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:13 kubernetes-upgrade-171701 kubelet[3593]: E1107 17:20:13.939588    3593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.483473  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:14 kubernetes-upgrade-171701 kubelet[3604]: E1107 17:20:14.694373    3604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.483857  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:15 kubernetes-upgrade-171701 kubelet[3614]: E1107 17:20:15.435678    3614 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.484263  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:16 kubernetes-upgrade-171701 kubelet[3625]: E1107 17:20:16.198070    3625 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.484650  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:16 kubernetes-upgrade-171701 kubelet[3635]: E1107 17:20:16.931982    3635 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.485044  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:17 kubernetes-upgrade-171701 kubelet[3646]: E1107 17:20:17.688072    3646 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.485425  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:18 kubernetes-upgrade-171701 kubelet[3656]: E1107 17:20:18.435383    3656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.485831  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:19 kubernetes-upgrade-171701 kubelet[3667]: E1107 17:20:19.181252    3667 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.486237  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:19 kubernetes-upgrade-171701 kubelet[3678]: E1107 17:20:19.930702    3678 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.486668  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:20 kubernetes-upgrade-171701 kubelet[3688]: E1107 17:20:20.684465    3688 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.487064  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:21 kubernetes-upgrade-171701 kubelet[3770]: E1107 17:20:21.441038    3770 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:20:21.487210  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:20:21.487226  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:20:21.504465  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:20:21.504504  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:20:21.563186  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:20:21.563209  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:20:21.563219  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:20:21.600284  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:20:21.600326  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:20:21.626653  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:20:21.626682  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:20:21.626804  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:20:21.626820  209319 out.go:239]   Nov 07 17:20:18 kubernetes-upgrade-171701 kubelet[3656]: E1107 17:20:18.435383    3656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.626828  209319 out.go:239]   Nov 07 17:20:19 kubernetes-upgrade-171701 kubelet[3667]: E1107 17:20:19.181252    3667 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.626838  209319 out.go:239]   Nov 07 17:20:19 kubernetes-upgrade-171701 kubelet[3678]: E1107 17:20:19.930702    3678 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.626846  209319 out.go:239]   Nov 07 17:20:20 kubernetes-upgrade-171701 kubelet[3688]: E1107 17:20:20.684465    3688 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:21.626858  209319 out.go:239]   Nov 07 17:20:21 kubernetes-upgrade-171701 kubelet[3770]: E1107 17:20:21.441038    3770 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:20:21.626867  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:20:21.626876  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:20:31.627257  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:20:31.744304  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:20:31.744372  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:20:31.768121  209319 cri.go:87] found id: ""
	I1107 17:20:31.768144  209319 logs.go:274] 0 containers: []
	W1107 17:20:31.768150  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:20:31.768157  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:20:31.768201  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:20:31.792213  209319 cri.go:87] found id: ""
	I1107 17:20:31.792240  209319 logs.go:274] 0 containers: []
	W1107 17:20:31.792246  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:20:31.792251  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:20:31.792294  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:20:31.814617  209319 cri.go:87] found id: ""
	I1107 17:20:31.814643  209319 logs.go:274] 0 containers: []
	W1107 17:20:31.814654  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:20:31.814660  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:20:31.814702  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:20:31.837262  209319 cri.go:87] found id: ""
	I1107 17:20:31.837286  209319 logs.go:274] 0 containers: []
	W1107 17:20:31.837292  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:20:31.837298  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:20:31.837346  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:20:31.861214  209319 cri.go:87] found id: ""
	I1107 17:20:31.861243  209319 logs.go:274] 0 containers: []
	W1107 17:20:31.861252  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:20:31.861261  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:20:31.861305  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:20:31.884496  209319 cri.go:87] found id: ""
	I1107 17:20:31.884523  209319 logs.go:274] 0 containers: []
	W1107 17:20:31.884530  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:20:31.884537  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:20:31.884588  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:20:31.908352  209319 cri.go:87] found id: ""
	I1107 17:20:31.908379  209319 logs.go:274] 0 containers: []
	W1107 17:20:31.908385  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:20:31.908392  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:20:31.908431  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:20:31.933610  209319 cri.go:87] found id: ""
	I1107 17:20:31.933637  209319 logs.go:274] 0 containers: []
	W1107 17:20:31.933644  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:20:31.933657  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:20:31.933672  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:20:31.970150  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:20:31.970189  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:20:31.996884  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:20:31.996916  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:20:32.012851  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:42 kubernetes-upgrade-171701 kubelet[2732]: E1107 17:19:42.444170    2732 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.013250  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:43 kubernetes-upgrade-171701 kubelet[2742]: E1107 17:19:43.184160    2742 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.013689  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:43 kubernetes-upgrade-171701 kubelet[2753]: E1107 17:19:43.945445    2753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.014056  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:44 kubernetes-upgrade-171701 kubelet[2764]: E1107 17:19:44.694894    2764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.014504  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:45 kubernetes-upgrade-171701 kubelet[2774]: E1107 17:19:45.442013    2774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.014961  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:46 kubernetes-upgrade-171701 kubelet[2785]: E1107 17:19:46.195987    2785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.015342  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:46 kubernetes-upgrade-171701 kubelet[2796]: E1107 17:19:46.937843    2796 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.015724  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:47 kubernetes-upgrade-171701 kubelet[2806]: E1107 17:19:47.697895    2806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.016086  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:48 kubernetes-upgrade-171701 kubelet[2817]: E1107 17:19:48.438431    2817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.016483  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:49 kubernetes-upgrade-171701 kubelet[2828]: E1107 17:19:49.200878    2828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.016835  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:49 kubernetes-upgrade-171701 kubelet[2906]: E1107 17:19:49.947875    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.017298  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:50 kubernetes-upgrade-171701 kubelet[2987]: E1107 17:19:50.687852    2987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.017825  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:51 kubernetes-upgrade-171701 kubelet[2997]: E1107 17:19:51.436727    2997 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.018373  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:52 kubernetes-upgrade-171701 kubelet[3008]: E1107 17:19:52.189944    3008 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.018997  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:52 kubernetes-upgrade-171701 kubelet[3018]: E1107 17:19:52.932957    3018 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.019504  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:53 kubernetes-upgrade-171701 kubelet[3029]: E1107 17:19:53.705204    3029 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.019860  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:54 kubernetes-upgrade-171701 kubelet[3040]: E1107 17:19:54.431405    3040 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.020213  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:55 kubernetes-upgrade-171701 kubelet[3051]: E1107 17:19:55.193359    3051 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.020556  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:55 kubernetes-upgrade-171701 kubelet[3061]: E1107 17:19:55.941035    3061 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.020912  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:56 kubernetes-upgrade-171701 kubelet[3071]: E1107 17:19:56.696082    3071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.021263  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:57 kubernetes-upgrade-171701 kubelet[3081]: E1107 17:19:57.444814    3081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.021613  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3092]: E1107 17:19:58.201611    3092 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.021966  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3102]: E1107 17:19:58.932200    3102 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.022359  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:59 kubernetes-upgrade-171701 kubelet[3113]: E1107 17:19:59.692631    3113 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.022707  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:00 kubernetes-upgrade-171701 kubelet[3182]: E1107 17:20:00.460592    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.023049  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:01 kubernetes-upgrade-171701 kubelet[3269]: E1107 17:20:01.185740    3269 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.023472  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:01 kubernetes-upgrade-171701 kubelet[3280]: E1107 17:20:01.946249    3280 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.023998  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:02 kubernetes-upgrade-171701 kubelet[3291]: E1107 17:20:02.692964    3291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.024512  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:03 kubernetes-upgrade-171701 kubelet[3302]: E1107 17:20:03.435122    3302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.024876  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:04 kubernetes-upgrade-171701 kubelet[3313]: E1107 17:20:04.189750    3313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.025244  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:04 kubernetes-upgrade-171701 kubelet[3325]: E1107 17:20:04.930640    3325 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.025712  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:05 kubernetes-upgrade-171701 kubelet[3336]: E1107 17:20:05.684994    3336 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.026141  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:06 kubernetes-upgrade-171701 kubelet[3347]: E1107 17:20:06.431306    3347 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.026636  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:07 kubernetes-upgrade-171701 kubelet[3358]: E1107 17:20:07.185085    3358 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.027055  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:07 kubernetes-upgrade-171701 kubelet[3369]: E1107 17:20:07.933941    3369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.027407  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:08 kubernetes-upgrade-171701 kubelet[3380]: E1107 17:20:08.686737    3380 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.027768  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:09 kubernetes-upgrade-171701 kubelet[3390]: E1107 17:20:09.431579    3390 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.028122  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3402]: E1107 17:20:10.187571    3402 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.028475  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3488]: E1107 17:20:10.935744    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.028821  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:11 kubernetes-upgrade-171701 kubelet[3559]: E1107 17:20:11.690274    3559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.029172  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:12 kubernetes-upgrade-171701 kubelet[3569]: E1107 17:20:12.434698    3569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.029539  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:13 kubernetes-upgrade-171701 kubelet[3582]: E1107 17:20:13.189943    3582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.029890  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:13 kubernetes-upgrade-171701 kubelet[3593]: E1107 17:20:13.939588    3593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.030240  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:14 kubernetes-upgrade-171701 kubelet[3604]: E1107 17:20:14.694373    3604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.030613  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:15 kubernetes-upgrade-171701 kubelet[3614]: E1107 17:20:15.435678    3614 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.030968  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:16 kubernetes-upgrade-171701 kubelet[3625]: E1107 17:20:16.198070    3625 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.031310  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:16 kubernetes-upgrade-171701 kubelet[3635]: E1107 17:20:16.931982    3635 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.031665  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:17 kubernetes-upgrade-171701 kubelet[3646]: E1107 17:20:17.688072    3646 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.032005  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:18 kubernetes-upgrade-171701 kubelet[3656]: E1107 17:20:18.435383    3656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.032363  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:19 kubernetes-upgrade-171701 kubelet[3667]: E1107 17:20:19.181252    3667 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.032727  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:19 kubernetes-upgrade-171701 kubelet[3678]: E1107 17:20:19.930702    3678 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.033076  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:20 kubernetes-upgrade-171701 kubelet[3688]: E1107 17:20:20.684465    3688 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.033426  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:21 kubernetes-upgrade-171701 kubelet[3770]: E1107 17:20:21.441038    3770 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.033780  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:22 kubernetes-upgrade-171701 kubelet[3845]: E1107 17:20:22.193456    3845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.034142  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:22 kubernetes-upgrade-171701 kubelet[3856]: E1107 17:20:22.931800    3856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.034566  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:23 kubernetes-upgrade-171701 kubelet[3867]: E1107 17:20:23.681518    3867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.034912  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:24 kubernetes-upgrade-171701 kubelet[3878]: E1107 17:20:24.429732    3878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.035278  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:25 kubernetes-upgrade-171701 kubelet[3889]: E1107 17:20:25.188526    3889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.035784  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:25 kubernetes-upgrade-171701 kubelet[3900]: E1107 17:20:25.933166    3900 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.036175  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:26 kubernetes-upgrade-171701 kubelet[3912]: E1107 17:20:26.682430    3912 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.036527  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:27 kubernetes-upgrade-171701 kubelet[3924]: E1107 17:20:27.433237    3924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.036866  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:28 kubernetes-upgrade-171701 kubelet[3935]: E1107 17:20:28.180959    3935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.037217  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:28 kubernetes-upgrade-171701 kubelet[3946]: E1107 17:20:28.932865    3946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.037573  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:29 kubernetes-upgrade-171701 kubelet[3956]: E1107 17:20:29.682505    3956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.037926  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:30 kubernetes-upgrade-171701 kubelet[3967]: E1107 17:20:30.431461    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.038279  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:31 kubernetes-upgrade-171701 kubelet[3978]: E1107 17:20:31.182892    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.038648  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:31 kubernetes-upgrade-171701 kubelet[4067]: E1107 17:20:31.938127    4067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:20:32.038767  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:20:32.038784  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:20:32.054552  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:20:32.054574  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:20:32.108804  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:20:32.108836  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:20:32.108848  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:20:32.108983  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:20:32.109001  209319 out.go:239]   Nov 07 17:20:28 kubernetes-upgrade-171701 kubelet[3946]: E1107 17:20:28.932865    3946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.109009  209319 out.go:239]   Nov 07 17:20:29 kubernetes-upgrade-171701 kubelet[3956]: E1107 17:20:29.682505    3956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.109026  209319 out.go:239]   Nov 07 17:20:30 kubernetes-upgrade-171701 kubelet[3967]: E1107 17:20:30.431461    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.109039  209319 out.go:239]   Nov 07 17:20:31 kubernetes-upgrade-171701 kubelet[3978]: E1107 17:20:31.182892    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:32.109051  209319 out.go:239]   Nov 07 17:20:31 kubernetes-upgrade-171701 kubelet[4067]: E1107 17:20:31.938127    4067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:20:32.109061  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:20:32.109069  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:20:42.110027  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:20:42.244117  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:20:42.244201  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:20:42.267264  209319 cri.go:87] found id: ""
	I1107 17:20:42.267289  209319 logs.go:274] 0 containers: []
	W1107 17:20:42.267296  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:20:42.267302  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:20:42.267353  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:20:42.289689  209319 cri.go:87] found id: ""
	I1107 17:20:42.289716  209319 logs.go:274] 0 containers: []
	W1107 17:20:42.289725  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:20:42.289733  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:20:42.289784  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:20:42.312878  209319 cri.go:87] found id: ""
	I1107 17:20:42.312903  209319 logs.go:274] 0 containers: []
	W1107 17:20:42.312909  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:20:42.312916  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:20:42.312972  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:20:42.335564  209319 cri.go:87] found id: ""
	I1107 17:20:42.335595  209319 logs.go:274] 0 containers: []
	W1107 17:20:42.335603  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:20:42.335611  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:20:42.335653  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:20:42.357576  209319 cri.go:87] found id: ""
	I1107 17:20:42.357605  209319 logs.go:274] 0 containers: []
	W1107 17:20:42.357614  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:20:42.357622  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:20:42.357685  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:20:42.380045  209319 cri.go:87] found id: ""
	I1107 17:20:42.380073  209319 logs.go:274] 0 containers: []
	W1107 17:20:42.380083  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:20:42.380093  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:20:42.380150  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:20:42.403692  209319 cri.go:87] found id: ""
	I1107 17:20:42.403721  209319 logs.go:274] 0 containers: []
	W1107 17:20:42.403732  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:20:42.403741  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:20:42.403794  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:20:42.431304  209319 cri.go:87] found id: ""
	I1107 17:20:42.431334  209319 logs.go:274] 0 containers: []
	W1107 17:20:42.431343  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:20:42.431356  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:20:42.431379  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:20:42.448475  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:20:42.448513  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:20:42.505667  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:20:42.505697  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:20:42.505707  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:20:42.540437  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:20:42.540475  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:20:42.567028  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:20:42.567060  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:20:42.583883  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:52 kubernetes-upgrade-171701 kubelet[3018]: E1107 17:19:52.932957    3018 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.584254  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:53 kubernetes-upgrade-171701 kubelet[3029]: E1107 17:19:53.705204    3029 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.584605  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:54 kubernetes-upgrade-171701 kubelet[3040]: E1107 17:19:54.431405    3040 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.584983  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:55 kubernetes-upgrade-171701 kubelet[3051]: E1107 17:19:55.193359    3051 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.585374  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:55 kubernetes-upgrade-171701 kubelet[3061]: E1107 17:19:55.941035    3061 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.585751  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:56 kubernetes-upgrade-171701 kubelet[3071]: E1107 17:19:56.696082    3071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.586128  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:57 kubernetes-upgrade-171701 kubelet[3081]: E1107 17:19:57.444814    3081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.586532  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3092]: E1107 17:19:58.201611    3092 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.586909  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:58 kubernetes-upgrade-171701 kubelet[3102]: E1107 17:19:58.932200    3102 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.587300  209319 logs.go:138] Found kubelet problem: Nov 07 17:19:59 kubernetes-upgrade-171701 kubelet[3113]: E1107 17:19:59.692631    3113 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.587674  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:00 kubernetes-upgrade-171701 kubelet[3182]: E1107 17:20:00.460592    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.588059  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:01 kubernetes-upgrade-171701 kubelet[3269]: E1107 17:20:01.185740    3269 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.588441  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:01 kubernetes-upgrade-171701 kubelet[3280]: E1107 17:20:01.946249    3280 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.588817  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:02 kubernetes-upgrade-171701 kubelet[3291]: E1107 17:20:02.692964    3291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.589226  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:03 kubernetes-upgrade-171701 kubelet[3302]: E1107 17:20:03.435122    3302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.589637  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:04 kubernetes-upgrade-171701 kubelet[3313]: E1107 17:20:04.189750    3313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.590017  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:04 kubernetes-upgrade-171701 kubelet[3325]: E1107 17:20:04.930640    3325 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.590443  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:05 kubernetes-upgrade-171701 kubelet[3336]: E1107 17:20:05.684994    3336 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.590811  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:06 kubernetes-upgrade-171701 kubelet[3347]: E1107 17:20:06.431306    3347 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.591160  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:07 kubernetes-upgrade-171701 kubelet[3358]: E1107 17:20:07.185085    3358 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.591504  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:07 kubernetes-upgrade-171701 kubelet[3369]: E1107 17:20:07.933941    3369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.591871  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:08 kubernetes-upgrade-171701 kubelet[3380]: E1107 17:20:08.686737    3380 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.592231  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:09 kubernetes-upgrade-171701 kubelet[3390]: E1107 17:20:09.431579    3390 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.592578  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3402]: E1107 17:20:10.187571    3402 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.592932  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3488]: E1107 17:20:10.935744    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.593282  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:11 kubernetes-upgrade-171701 kubelet[3559]: E1107 17:20:11.690274    3559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.593650  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:12 kubernetes-upgrade-171701 kubelet[3569]: E1107 17:20:12.434698    3569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.593997  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:13 kubernetes-upgrade-171701 kubelet[3582]: E1107 17:20:13.189943    3582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.594450  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:13 kubernetes-upgrade-171701 kubelet[3593]: E1107 17:20:13.939588    3593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.594842  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:14 kubernetes-upgrade-171701 kubelet[3604]: E1107 17:20:14.694373    3604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.595190  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:15 kubernetes-upgrade-171701 kubelet[3614]: E1107 17:20:15.435678    3614 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.595543  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:16 kubernetes-upgrade-171701 kubelet[3625]: E1107 17:20:16.198070    3625 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.595898  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:16 kubernetes-upgrade-171701 kubelet[3635]: E1107 17:20:16.931982    3635 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.596249  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:17 kubernetes-upgrade-171701 kubelet[3646]: E1107 17:20:17.688072    3646 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.596608  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:18 kubernetes-upgrade-171701 kubelet[3656]: E1107 17:20:18.435383    3656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.596982  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:19 kubernetes-upgrade-171701 kubelet[3667]: E1107 17:20:19.181252    3667 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.597365  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:19 kubernetes-upgrade-171701 kubelet[3678]: E1107 17:20:19.930702    3678 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.597717  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:20 kubernetes-upgrade-171701 kubelet[3688]: E1107 17:20:20.684465    3688 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.598070  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:21 kubernetes-upgrade-171701 kubelet[3770]: E1107 17:20:21.441038    3770 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.598443  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:22 kubernetes-upgrade-171701 kubelet[3845]: E1107 17:20:22.193456    3845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.598811  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:22 kubernetes-upgrade-171701 kubelet[3856]: E1107 17:20:22.931800    3856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.599177  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:23 kubernetes-upgrade-171701 kubelet[3867]: E1107 17:20:23.681518    3867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.599531  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:24 kubernetes-upgrade-171701 kubelet[3878]: E1107 17:20:24.429732    3878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.599887  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:25 kubernetes-upgrade-171701 kubelet[3889]: E1107 17:20:25.188526    3889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.600232  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:25 kubernetes-upgrade-171701 kubelet[3900]: E1107 17:20:25.933166    3900 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.600582  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:26 kubernetes-upgrade-171701 kubelet[3912]: E1107 17:20:26.682430    3912 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.600933  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:27 kubernetes-upgrade-171701 kubelet[3924]: E1107 17:20:27.433237    3924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.601282  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:28 kubernetes-upgrade-171701 kubelet[3935]: E1107 17:20:28.180959    3935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.601648  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:28 kubernetes-upgrade-171701 kubelet[3946]: E1107 17:20:28.932865    3946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.601995  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:29 kubernetes-upgrade-171701 kubelet[3956]: E1107 17:20:29.682505    3956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.602419  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:30 kubernetes-upgrade-171701 kubelet[3967]: E1107 17:20:30.431461    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.602776  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:31 kubernetes-upgrade-171701 kubelet[3978]: E1107 17:20:31.182892    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.603120  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:31 kubernetes-upgrade-171701 kubelet[4067]: E1107 17:20:31.938127    4067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.603464  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:32 kubernetes-upgrade-171701 kubelet[4136]: E1107 17:20:32.681227    4136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.603812  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:33 kubernetes-upgrade-171701 kubelet[4147]: E1107 17:20:33.433413    4147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.604156  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:34 kubernetes-upgrade-171701 kubelet[4158]: E1107 17:20:34.187859    4158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.604506  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:34 kubernetes-upgrade-171701 kubelet[4170]: E1107 17:20:34.932355    4170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.604861  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:35 kubernetes-upgrade-171701 kubelet[4181]: E1107 17:20:35.679599    4181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.605212  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:36 kubernetes-upgrade-171701 kubelet[4192]: E1107 17:20:36.430712    4192 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.605558  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:37 kubernetes-upgrade-171701 kubelet[4203]: E1107 17:20:37.184405    4203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.605911  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:37 kubernetes-upgrade-171701 kubelet[4214]: E1107 17:20:37.931528    4214 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.606303  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:38 kubernetes-upgrade-171701 kubelet[4225]: E1107 17:20:38.682533    4225 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.606688  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:39 kubernetes-upgrade-171701 kubelet[4236]: E1107 17:20:39.431612    4236 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.607041  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:40 kubernetes-upgrade-171701 kubelet[4247]: E1107 17:20:40.184447    4247 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.607388  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:40 kubernetes-upgrade-171701 kubelet[4258]: E1107 17:20:40.935084    4258 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.607739  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:41 kubernetes-upgrade-171701 kubelet[4270]: E1107 17:20:41.681296    4270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.608086  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:42 kubernetes-upgrade-171701 kubelet[4360]: E1107 17:20:42.439072    4360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:20:42.608207  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:20:42.608222  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:20:42.608331  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:20:42.608344  209319 out.go:239]   Nov 07 17:20:39 kubernetes-upgrade-171701 kubelet[4236]: E1107 17:20:39.431612    4236 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.608351  209319 out.go:239]   Nov 07 17:20:40 kubernetes-upgrade-171701 kubelet[4247]: E1107 17:20:40.184447    4247 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.608358  209319 out.go:239]   Nov 07 17:20:40 kubernetes-upgrade-171701 kubelet[4258]: E1107 17:20:40.935084    4258 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.608367  209319 out.go:239]   Nov 07 17:20:41 kubernetes-upgrade-171701 kubelet[4270]: E1107 17:20:41.681296    4270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:42.608373  209319 out.go:239]   Nov 07 17:20:42 kubernetes-upgrade-171701 kubelet[4360]: E1107 17:20:42.439072    4360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:20:42.608381  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:20:42.608386  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:20:52.609262  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:20:52.744436  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:20:52.744503  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:20:52.767395  209319 cri.go:87] found id: ""
	I1107 17:20:52.767425  209319 logs.go:274] 0 containers: []
	W1107 17:20:52.767434  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:20:52.767442  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:20:52.767498  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:20:52.789929  209319 cri.go:87] found id: ""
	I1107 17:20:52.789963  209319 logs.go:274] 0 containers: []
	W1107 17:20:52.789972  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:20:52.789980  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:20:52.790042  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:20:52.812421  209319 cri.go:87] found id: ""
	I1107 17:20:52.812458  209319 logs.go:274] 0 containers: []
	W1107 17:20:52.812467  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:20:52.812478  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:20:52.812532  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:20:52.833973  209319 cri.go:87] found id: ""
	I1107 17:20:52.833995  209319 logs.go:274] 0 containers: []
	W1107 17:20:52.834001  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:20:52.834008  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:20:52.834065  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:20:52.856627  209319 cri.go:87] found id: ""
	I1107 17:20:52.856652  209319 logs.go:274] 0 containers: []
	W1107 17:20:52.856658  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:20:52.856665  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:20:52.856715  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:20:52.880156  209319 cri.go:87] found id: ""
	I1107 17:20:52.880182  209319 logs.go:274] 0 containers: []
	W1107 17:20:52.880190  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:20:52.880198  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:20:52.880250  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:20:52.904289  209319 cri.go:87] found id: ""
	I1107 17:20:52.904312  209319 logs.go:274] 0 containers: []
	W1107 17:20:52.904321  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:20:52.904330  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:20:52.904375  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:20:52.932769  209319 cri.go:87] found id: ""
	I1107 17:20:52.932799  209319 logs.go:274] 0 containers: []
	W1107 17:20:52.932809  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:20:52.932823  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:20:52.932837  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:20:52.958454  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:20:52.958484  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:20:52.974584  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:03 kubernetes-upgrade-171701 kubelet[3302]: E1107 17:20:03.435122    3302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.974947  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:04 kubernetes-upgrade-171701 kubelet[3313]: E1107 17:20:04.189750    3313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.975296  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:04 kubernetes-upgrade-171701 kubelet[3325]: E1107 17:20:04.930640    3325 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.975641  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:05 kubernetes-upgrade-171701 kubelet[3336]: E1107 17:20:05.684994    3336 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.975984  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:06 kubernetes-upgrade-171701 kubelet[3347]: E1107 17:20:06.431306    3347 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.976338  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:07 kubernetes-upgrade-171701 kubelet[3358]: E1107 17:20:07.185085    3358 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.976691  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:07 kubernetes-upgrade-171701 kubelet[3369]: E1107 17:20:07.933941    3369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.977030  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:08 kubernetes-upgrade-171701 kubelet[3380]: E1107 17:20:08.686737    3380 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.977386  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:09 kubernetes-upgrade-171701 kubelet[3390]: E1107 17:20:09.431579    3390 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.977742  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3402]: E1107 17:20:10.187571    3402 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.978096  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:10 kubernetes-upgrade-171701 kubelet[3488]: E1107 17:20:10.935744    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.978509  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:11 kubernetes-upgrade-171701 kubelet[3559]: E1107 17:20:11.690274    3559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.978859  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:12 kubernetes-upgrade-171701 kubelet[3569]: E1107 17:20:12.434698    3569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.979211  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:13 kubernetes-upgrade-171701 kubelet[3582]: E1107 17:20:13.189943    3582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.979566  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:13 kubernetes-upgrade-171701 kubelet[3593]: E1107 17:20:13.939588    3593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.979913  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:14 kubernetes-upgrade-171701 kubelet[3604]: E1107 17:20:14.694373    3604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.980272  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:15 kubernetes-upgrade-171701 kubelet[3614]: E1107 17:20:15.435678    3614 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.980629  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:16 kubernetes-upgrade-171701 kubelet[3625]: E1107 17:20:16.198070    3625 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.980988  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:16 kubernetes-upgrade-171701 kubelet[3635]: E1107 17:20:16.931982    3635 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.981339  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:17 kubernetes-upgrade-171701 kubelet[3646]: E1107 17:20:17.688072    3646 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.981686  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:18 kubernetes-upgrade-171701 kubelet[3656]: E1107 17:20:18.435383    3656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.982028  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:19 kubernetes-upgrade-171701 kubelet[3667]: E1107 17:20:19.181252    3667 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.982423  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:19 kubernetes-upgrade-171701 kubelet[3678]: E1107 17:20:19.930702    3678 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.982786  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:20 kubernetes-upgrade-171701 kubelet[3688]: E1107 17:20:20.684465    3688 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.983135  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:21 kubernetes-upgrade-171701 kubelet[3770]: E1107 17:20:21.441038    3770 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.983482  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:22 kubernetes-upgrade-171701 kubelet[3845]: E1107 17:20:22.193456    3845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.983826  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:22 kubernetes-upgrade-171701 kubelet[3856]: E1107 17:20:22.931800    3856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.984176  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:23 kubernetes-upgrade-171701 kubelet[3867]: E1107 17:20:23.681518    3867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.984524  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:24 kubernetes-upgrade-171701 kubelet[3878]: E1107 17:20:24.429732    3878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.984873  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:25 kubernetes-upgrade-171701 kubelet[3889]: E1107 17:20:25.188526    3889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.985210  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:25 kubernetes-upgrade-171701 kubelet[3900]: E1107 17:20:25.933166    3900 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.985558  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:26 kubernetes-upgrade-171701 kubelet[3912]: E1107 17:20:26.682430    3912 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.985934  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:27 kubernetes-upgrade-171701 kubelet[3924]: E1107 17:20:27.433237    3924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.986284  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:28 kubernetes-upgrade-171701 kubelet[3935]: E1107 17:20:28.180959    3935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.986695  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:28 kubernetes-upgrade-171701 kubelet[3946]: E1107 17:20:28.932865    3946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.987068  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:29 kubernetes-upgrade-171701 kubelet[3956]: E1107 17:20:29.682505    3956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.987417  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:30 kubernetes-upgrade-171701 kubelet[3967]: E1107 17:20:30.431461    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.987766  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:31 kubernetes-upgrade-171701 kubelet[3978]: E1107 17:20:31.182892    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.988116  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:31 kubernetes-upgrade-171701 kubelet[4067]: E1107 17:20:31.938127    4067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.988464  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:32 kubernetes-upgrade-171701 kubelet[4136]: E1107 17:20:32.681227    4136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.988812  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:33 kubernetes-upgrade-171701 kubelet[4147]: E1107 17:20:33.433413    4147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.989159  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:34 kubernetes-upgrade-171701 kubelet[4158]: E1107 17:20:34.187859    4158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.989520  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:34 kubernetes-upgrade-171701 kubelet[4170]: E1107 17:20:34.932355    4170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.989882  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:35 kubernetes-upgrade-171701 kubelet[4181]: E1107 17:20:35.679599    4181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.990253  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:36 kubernetes-upgrade-171701 kubelet[4192]: E1107 17:20:36.430712    4192 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.990642  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:37 kubernetes-upgrade-171701 kubelet[4203]: E1107 17:20:37.184405    4203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.990998  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:37 kubernetes-upgrade-171701 kubelet[4214]: E1107 17:20:37.931528    4214 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.991346  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:38 kubernetes-upgrade-171701 kubelet[4225]: E1107 17:20:38.682533    4225 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.991699  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:39 kubernetes-upgrade-171701 kubelet[4236]: E1107 17:20:39.431612    4236 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.992043  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:40 kubernetes-upgrade-171701 kubelet[4247]: E1107 17:20:40.184447    4247 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.992391  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:40 kubernetes-upgrade-171701 kubelet[4258]: E1107 17:20:40.935084    4258 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.992741  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:41 kubernetes-upgrade-171701 kubelet[4270]: E1107 17:20:41.681296    4270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.993100  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:42 kubernetes-upgrade-171701 kubelet[4360]: E1107 17:20:42.439072    4360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.993444  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:43 kubernetes-upgrade-171701 kubelet[4428]: E1107 17:20:43.181074    4428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.993785  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:43 kubernetes-upgrade-171701 kubelet[4438]: E1107 17:20:43.934507    4438 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.994139  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:44 kubernetes-upgrade-171701 kubelet[4450]: E1107 17:20:44.681966    4450 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.994510  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:45 kubernetes-upgrade-171701 kubelet[4461]: E1107 17:20:45.431434    4461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.994859  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:46 kubernetes-upgrade-171701 kubelet[4472]: E1107 17:20:46.180181    4472 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.995206  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:46 kubernetes-upgrade-171701 kubelet[4483]: E1107 17:20:46.933048    4483 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.995552  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:47 kubernetes-upgrade-171701 kubelet[4494]: E1107 17:20:47.681894    4494 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.995951  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:48 kubernetes-upgrade-171701 kubelet[4505]: E1107 17:20:48.430200    4505 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.996335  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:49 kubernetes-upgrade-171701 kubelet[4517]: E1107 17:20:49.181043    4517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.996798  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:49 kubernetes-upgrade-171701 kubelet[4528]: E1107 17:20:49.935310    4528 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.997168  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:50 kubernetes-upgrade-171701 kubelet[4539]: E1107 17:20:50.682689    4539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.997518  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:51 kubernetes-upgrade-171701 kubelet[4550]: E1107 17:20:51.431212    4550 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.997868  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:52 kubernetes-upgrade-171701 kubelet[4561]: E1107 17:20:52.182976    4561 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:52.998221  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:52 kubernetes-upgrade-171701 kubelet[4650]: E1107 17:20:52.939382    4650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:20:52.998367  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:20:52.998386  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:20:53.014060  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:20:53.014089  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:20:53.069204  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:20:53.069231  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:20:53.069245  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:20:53.103497  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:20:53.103532  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:20:53.103644  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:20:53.103657  209319 out.go:239]   Nov 07 17:20:49 kubernetes-upgrade-171701 kubelet[4528]: E1107 17:20:49.935310    4528 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:53.103663  209319 out.go:239]   Nov 07 17:20:50 kubernetes-upgrade-171701 kubelet[4539]: E1107 17:20:50.682689    4539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:53.103667  209319 out.go:239]   Nov 07 17:20:51 kubernetes-upgrade-171701 kubelet[4550]: E1107 17:20:51.431212    4550 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:53.103672  209319 out.go:239]   Nov 07 17:20:52 kubernetes-upgrade-171701 kubelet[4561]: E1107 17:20:52.182976    4561 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:20:53.103677  209319 out.go:239]   Nov 07 17:20:52 kubernetes-upgrade-171701 kubelet[4650]: E1107 17:20:52.939382    4650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:20:53.103681  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:20:53.103692  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:21:03.104187  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:21:03.244122  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:21:03.244192  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:21:03.267516  209319 cri.go:87] found id: ""
	I1107 17:21:03.267547  209319 logs.go:274] 0 containers: []
	W1107 17:21:03.267559  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:21:03.267566  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:21:03.267617  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:21:03.289899  209319 cri.go:87] found id: ""
	I1107 17:21:03.289922  209319 logs.go:274] 0 containers: []
	W1107 17:21:03.289931  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:21:03.289939  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:21:03.289994  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:21:03.313250  209319 cri.go:87] found id: ""
	I1107 17:21:03.313276  209319 logs.go:274] 0 containers: []
	W1107 17:21:03.313283  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:21:03.313291  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:21:03.313343  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:21:03.336733  209319 cri.go:87] found id: ""
	I1107 17:21:03.336755  209319 logs.go:274] 0 containers: []
	W1107 17:21:03.336763  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:21:03.336782  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:21:03.336832  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:21:03.358996  209319 cri.go:87] found id: ""
	I1107 17:21:03.359017  209319 logs.go:274] 0 containers: []
	W1107 17:21:03.359024  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:21:03.359030  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:21:03.359072  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:21:03.381490  209319 cri.go:87] found id: ""
	I1107 17:21:03.381510  209319 logs.go:274] 0 containers: []
	W1107 17:21:03.381516  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:21:03.381522  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:21:03.381561  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:21:03.404480  209319 cri.go:87] found id: ""
	I1107 17:21:03.404509  209319 logs.go:274] 0 containers: []
	W1107 17:21:03.404518  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:21:03.404528  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:21:03.404589  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:21:03.431600  209319 cri.go:87] found id: ""
	I1107 17:21:03.431630  209319 logs.go:274] 0 containers: []
	W1107 17:21:03.431637  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:21:03.431648  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:21:03.431661  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:21:03.486493  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:21:03.486515  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:21:03.486527  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:21:03.532610  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:21:03.532640  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:21:03.558201  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:21:03.558229  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:21:03.576100  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:13 kubernetes-upgrade-171701 kubelet[3593]: E1107 17:20:13.939588    3593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.576463  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:14 kubernetes-upgrade-171701 kubelet[3604]: E1107 17:20:14.694373    3604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.576815  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:15 kubernetes-upgrade-171701 kubelet[3614]: E1107 17:20:15.435678    3614 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.577208  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:16 kubernetes-upgrade-171701 kubelet[3625]: E1107 17:20:16.198070    3625 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.577573  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:16 kubernetes-upgrade-171701 kubelet[3635]: E1107 17:20:16.931982    3635 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.577930  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:17 kubernetes-upgrade-171701 kubelet[3646]: E1107 17:20:17.688072    3646 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.578288  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:18 kubernetes-upgrade-171701 kubelet[3656]: E1107 17:20:18.435383    3656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.578687  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:19 kubernetes-upgrade-171701 kubelet[3667]: E1107 17:20:19.181252    3667 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.579039  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:19 kubernetes-upgrade-171701 kubelet[3678]: E1107 17:20:19.930702    3678 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.579384  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:20 kubernetes-upgrade-171701 kubelet[3688]: E1107 17:20:20.684465    3688 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.579731  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:21 kubernetes-upgrade-171701 kubelet[3770]: E1107 17:20:21.441038    3770 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.580087  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:22 kubernetes-upgrade-171701 kubelet[3845]: E1107 17:20:22.193456    3845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.580437  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:22 kubernetes-upgrade-171701 kubelet[3856]: E1107 17:20:22.931800    3856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.580786  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:23 kubernetes-upgrade-171701 kubelet[3867]: E1107 17:20:23.681518    3867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.581137  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:24 kubernetes-upgrade-171701 kubelet[3878]: E1107 17:20:24.429732    3878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.581489  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:25 kubernetes-upgrade-171701 kubelet[3889]: E1107 17:20:25.188526    3889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.581833  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:25 kubernetes-upgrade-171701 kubelet[3900]: E1107 17:20:25.933166    3900 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.582189  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:26 kubernetes-upgrade-171701 kubelet[3912]: E1107 17:20:26.682430    3912 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.582557  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:27 kubernetes-upgrade-171701 kubelet[3924]: E1107 17:20:27.433237    3924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.582905  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:28 kubernetes-upgrade-171701 kubelet[3935]: E1107 17:20:28.180959    3935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.583277  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:28 kubernetes-upgrade-171701 kubelet[3946]: E1107 17:20:28.932865    3946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.583622  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:29 kubernetes-upgrade-171701 kubelet[3956]: E1107 17:20:29.682505    3956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.583963  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:30 kubernetes-upgrade-171701 kubelet[3967]: E1107 17:20:30.431461    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.584316  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:31 kubernetes-upgrade-171701 kubelet[3978]: E1107 17:20:31.182892    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.584663  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:31 kubernetes-upgrade-171701 kubelet[4067]: E1107 17:20:31.938127    4067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.585004  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:32 kubernetes-upgrade-171701 kubelet[4136]: E1107 17:20:32.681227    4136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.585354  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:33 kubernetes-upgrade-171701 kubelet[4147]: E1107 17:20:33.433413    4147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.585709  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:34 kubernetes-upgrade-171701 kubelet[4158]: E1107 17:20:34.187859    4158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.586083  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:34 kubernetes-upgrade-171701 kubelet[4170]: E1107 17:20:34.932355    4170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.586453  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:35 kubernetes-upgrade-171701 kubelet[4181]: E1107 17:20:35.679599    4181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.586803  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:36 kubernetes-upgrade-171701 kubelet[4192]: E1107 17:20:36.430712    4192 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.587159  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:37 kubernetes-upgrade-171701 kubelet[4203]: E1107 17:20:37.184405    4203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.587505  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:37 kubernetes-upgrade-171701 kubelet[4214]: E1107 17:20:37.931528    4214 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.587851  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:38 kubernetes-upgrade-171701 kubelet[4225]: E1107 17:20:38.682533    4225 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.588215  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:39 kubernetes-upgrade-171701 kubelet[4236]: E1107 17:20:39.431612    4236 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.588556  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:40 kubernetes-upgrade-171701 kubelet[4247]: E1107 17:20:40.184447    4247 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.588901  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:40 kubernetes-upgrade-171701 kubelet[4258]: E1107 17:20:40.935084    4258 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.589261  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:41 kubernetes-upgrade-171701 kubelet[4270]: E1107 17:20:41.681296    4270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.589605  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:42 kubernetes-upgrade-171701 kubelet[4360]: E1107 17:20:42.439072    4360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.589950  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:43 kubernetes-upgrade-171701 kubelet[4428]: E1107 17:20:43.181074    4428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.590303  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:43 kubernetes-upgrade-171701 kubelet[4438]: E1107 17:20:43.934507    4438 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.590684  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:44 kubernetes-upgrade-171701 kubelet[4450]: E1107 17:20:44.681966    4450 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.591041  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:45 kubernetes-upgrade-171701 kubelet[4461]: E1107 17:20:45.431434    4461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.591386  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:46 kubernetes-upgrade-171701 kubelet[4472]: E1107 17:20:46.180181    4472 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.591736  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:46 kubernetes-upgrade-171701 kubelet[4483]: E1107 17:20:46.933048    4483 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.592113  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:47 kubernetes-upgrade-171701 kubelet[4494]: E1107 17:20:47.681894    4494 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.592470  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:48 kubernetes-upgrade-171701 kubelet[4505]: E1107 17:20:48.430200    4505 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.592809  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:49 kubernetes-upgrade-171701 kubelet[4517]: E1107 17:20:49.181043    4517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.593158  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:49 kubernetes-upgrade-171701 kubelet[4528]: E1107 17:20:49.935310    4528 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.593501  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:50 kubernetes-upgrade-171701 kubelet[4539]: E1107 17:20:50.682689    4539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.593854  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:51 kubernetes-upgrade-171701 kubelet[4550]: E1107 17:20:51.431212    4550 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.594200  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:52 kubernetes-upgrade-171701 kubelet[4561]: E1107 17:20:52.182976    4561 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.594562  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:52 kubernetes-upgrade-171701 kubelet[4650]: E1107 17:20:52.939382    4650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.594906  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:53 kubernetes-upgrade-171701 kubelet[4719]: E1107 17:20:53.689770    4719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.595255  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:54 kubernetes-upgrade-171701 kubelet[4733]: E1107 17:20:54.442194    4733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.595600  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:55 kubernetes-upgrade-171701 kubelet[4744]: E1107 17:20:55.186832    4744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.595950  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:55 kubernetes-upgrade-171701 kubelet[4754]: E1107 17:20:55.931959    4754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.596299  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:56 kubernetes-upgrade-171701 kubelet[4765]: E1107 17:20:56.681513    4765 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.596644  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:57 kubernetes-upgrade-171701 kubelet[4776]: E1107 17:20:57.430682    4776 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.596998  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:58 kubernetes-upgrade-171701 kubelet[4787]: E1107 17:20:58.183221    4787 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.597344  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:58 kubernetes-upgrade-171701 kubelet[4798]: E1107 17:20:58.938545    4798 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.597716  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:59 kubernetes-upgrade-171701 kubelet[4808]: E1107 17:20:59.681720    4808 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.598071  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:00 kubernetes-upgrade-171701 kubelet[4819]: E1107 17:21:00.431584    4819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.598473  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:01 kubernetes-upgrade-171701 kubelet[4830]: E1107 17:21:01.180783    4830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.598815  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:01 kubernetes-upgrade-171701 kubelet[4841]: E1107 17:21:01.929879    4841 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.599168  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:02 kubernetes-upgrade-171701 kubelet[4852]: E1107 17:21:02.681218    4852 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.599520  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:03 kubernetes-upgrade-171701 kubelet[4941]: E1107 17:21:03.437154    4941 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:21:03.599636  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:21:03.599650  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:21:03.615031  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:21:03.615055  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:21:03.615158  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:21:03.615170  209319 out.go:239]   Nov 07 17:21:00 kubernetes-upgrade-171701 kubelet[4819]: E1107 17:21:00.431584    4819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.615175  209319 out.go:239]   Nov 07 17:21:01 kubernetes-upgrade-171701 kubelet[4830]: E1107 17:21:01.180783    4830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.615179  209319 out.go:239]   Nov 07 17:21:01 kubernetes-upgrade-171701 kubelet[4841]: E1107 17:21:01.929879    4841 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.615185  209319 out.go:239]   Nov 07 17:21:02 kubernetes-upgrade-171701 kubelet[4852]: E1107 17:21:02.681218    4852 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:03.615190  209319 out.go:239]   Nov 07 17:21:03 kubernetes-upgrade-171701 kubelet[4941]: E1107 17:21:03.437154    4941 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:21:03.615193  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:21:03.615198  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:21:13.616549  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:21:13.743834  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:21:13.743933  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:21:13.766849  209319 cri.go:87] found id: ""
	I1107 17:21:13.766875  209319 logs.go:274] 0 containers: []
	W1107 17:21:13.766881  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:21:13.766888  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:21:13.766937  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:21:13.789779  209319 cri.go:87] found id: ""
	I1107 17:21:13.789806  209319 logs.go:274] 0 containers: []
	W1107 17:21:13.789816  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:21:13.789827  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:21:13.789878  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:21:13.815538  209319 cri.go:87] found id: ""
	I1107 17:21:13.815571  209319 logs.go:274] 0 containers: []
	W1107 17:21:13.815580  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:21:13.815587  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:21:13.815632  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:21:13.838639  209319 cri.go:87] found id: ""
	I1107 17:21:13.838663  209319 logs.go:274] 0 containers: []
	W1107 17:21:13.838670  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:21:13.838677  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:21:13.838718  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:21:13.861437  209319 cri.go:87] found id: ""
	I1107 17:21:13.861465  209319 logs.go:274] 0 containers: []
	W1107 17:21:13.861472  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:21:13.861478  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:21:13.861521  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:21:13.883833  209319 cri.go:87] found id: ""
	I1107 17:21:13.883865  209319 logs.go:274] 0 containers: []
	W1107 17:21:13.883875  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:21:13.883884  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:21:13.883935  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:21:13.908785  209319 cri.go:87] found id: ""
	I1107 17:21:13.908812  209319 logs.go:274] 0 containers: []
	W1107 17:21:13.908821  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:21:13.908830  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:21:13.908890  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:21:13.935047  209319 cri.go:87] found id: ""
	I1107 17:21:13.935072  209319 logs.go:274] 0 containers: []
	W1107 17:21:13.935078  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:21:13.935089  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:21:13.935100  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:21:13.961837  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:21:13.961866  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:21:13.979421  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:24 kubernetes-upgrade-171701 kubelet[3878]: E1107 17:20:24.429732    3878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.981183  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:25 kubernetes-upgrade-171701 kubelet[3889]: E1107 17:20:25.188526    3889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.981545  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:25 kubernetes-upgrade-171701 kubelet[3900]: E1107 17:20:25.933166    3900 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.981897  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:26 kubernetes-upgrade-171701 kubelet[3912]: E1107 17:20:26.682430    3912 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.982252  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:27 kubernetes-upgrade-171701 kubelet[3924]: E1107 17:20:27.433237    3924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.982635  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:28 kubernetes-upgrade-171701 kubelet[3935]: E1107 17:20:28.180959    3935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.982999  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:28 kubernetes-upgrade-171701 kubelet[3946]: E1107 17:20:28.932865    3946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.983349  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:29 kubernetes-upgrade-171701 kubelet[3956]: E1107 17:20:29.682505    3956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.983704  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:30 kubernetes-upgrade-171701 kubelet[3967]: E1107 17:20:30.431461    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.984146  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:31 kubernetes-upgrade-171701 kubelet[3978]: E1107 17:20:31.182892    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.984751  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:31 kubernetes-upgrade-171701 kubelet[4067]: E1107 17:20:31.938127    4067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.985164  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:32 kubernetes-upgrade-171701 kubelet[4136]: E1107 17:20:32.681227    4136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.985515  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:33 kubernetes-upgrade-171701 kubelet[4147]: E1107 17:20:33.433413    4147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.985867  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:34 kubernetes-upgrade-171701 kubelet[4158]: E1107 17:20:34.187859    4158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.986215  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:34 kubernetes-upgrade-171701 kubelet[4170]: E1107 17:20:34.932355    4170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.986599  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:35 kubernetes-upgrade-171701 kubelet[4181]: E1107 17:20:35.679599    4181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.986951  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:36 kubernetes-upgrade-171701 kubelet[4192]: E1107 17:20:36.430712    4192 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.987305  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:37 kubernetes-upgrade-171701 kubelet[4203]: E1107 17:20:37.184405    4203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.987676  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:37 kubernetes-upgrade-171701 kubelet[4214]: E1107 17:20:37.931528    4214 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.988030  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:38 kubernetes-upgrade-171701 kubelet[4225]: E1107 17:20:38.682533    4225 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.988384  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:39 kubernetes-upgrade-171701 kubelet[4236]: E1107 17:20:39.431612    4236 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.988729  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:40 kubernetes-upgrade-171701 kubelet[4247]: E1107 17:20:40.184447    4247 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.989074  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:40 kubernetes-upgrade-171701 kubelet[4258]: E1107 17:20:40.935084    4258 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.989452  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:41 kubernetes-upgrade-171701 kubelet[4270]: E1107 17:20:41.681296    4270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.989803  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:42 kubernetes-upgrade-171701 kubelet[4360]: E1107 17:20:42.439072    4360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.990151  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:43 kubernetes-upgrade-171701 kubelet[4428]: E1107 17:20:43.181074    4428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.990522  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:43 kubernetes-upgrade-171701 kubelet[4438]: E1107 17:20:43.934507    4438 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.990864  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:44 kubernetes-upgrade-171701 kubelet[4450]: E1107 17:20:44.681966    4450 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.991214  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:45 kubernetes-upgrade-171701 kubelet[4461]: E1107 17:20:45.431434    4461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.991566  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:46 kubernetes-upgrade-171701 kubelet[4472]: E1107 17:20:46.180181    4472 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.991914  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:46 kubernetes-upgrade-171701 kubelet[4483]: E1107 17:20:46.933048    4483 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.992263  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:47 kubernetes-upgrade-171701 kubelet[4494]: E1107 17:20:47.681894    4494 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.992627  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:48 kubernetes-upgrade-171701 kubelet[4505]: E1107 17:20:48.430200    4505 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.992973  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:49 kubernetes-upgrade-171701 kubelet[4517]: E1107 17:20:49.181043    4517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.993322  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:49 kubernetes-upgrade-171701 kubelet[4528]: E1107 17:20:49.935310    4528 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.993666  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:50 kubernetes-upgrade-171701 kubelet[4539]: E1107 17:20:50.682689    4539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.994011  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:51 kubernetes-upgrade-171701 kubelet[4550]: E1107 17:20:51.431212    4550 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.994410  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:52 kubernetes-upgrade-171701 kubelet[4561]: E1107 17:20:52.182976    4561 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.994777  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:52 kubernetes-upgrade-171701 kubelet[4650]: E1107 17:20:52.939382    4650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.995137  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:53 kubernetes-upgrade-171701 kubelet[4719]: E1107 17:20:53.689770    4719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.995494  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:54 kubernetes-upgrade-171701 kubelet[4733]: E1107 17:20:54.442194    4733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.995839  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:55 kubernetes-upgrade-171701 kubelet[4744]: E1107 17:20:55.186832    4744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.996197  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:55 kubernetes-upgrade-171701 kubelet[4754]: E1107 17:20:55.931959    4754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.996572  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:56 kubernetes-upgrade-171701 kubelet[4765]: E1107 17:20:56.681513    4765 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.996921  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:57 kubernetes-upgrade-171701 kubelet[4776]: E1107 17:20:57.430682    4776 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.997268  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:58 kubernetes-upgrade-171701 kubelet[4787]: E1107 17:20:58.183221    4787 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.997642  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:58 kubernetes-upgrade-171701 kubelet[4798]: E1107 17:20:58.938545    4798 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.998124  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:59 kubernetes-upgrade-171701 kubelet[4808]: E1107 17:20:59.681720    4808 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.998505  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:00 kubernetes-upgrade-171701 kubelet[4819]: E1107 17:21:00.431584    4819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.998953  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:01 kubernetes-upgrade-171701 kubelet[4830]: E1107 17:21:01.180783    4830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.999453  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:01 kubernetes-upgrade-171701 kubelet[4841]: E1107 17:21:01.929879    4841 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:13.999886  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:02 kubernetes-upgrade-171701 kubelet[4852]: E1107 17:21:02.681218    4852 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.000262  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:03 kubernetes-upgrade-171701 kubelet[4941]: E1107 17:21:03.437154    4941 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.000606  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:04 kubernetes-upgrade-171701 kubelet[5010]: E1107 17:21:04.180424    5010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.000956  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:04 kubernetes-upgrade-171701 kubelet[5021]: E1107 17:21:04.931239    5021 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.001334  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:05 kubernetes-upgrade-171701 kubelet[5032]: E1107 17:21:05.682840    5032 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.001687  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:06 kubernetes-upgrade-171701 kubelet[5044]: E1107 17:21:06.430655    5044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.002036  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:07 kubernetes-upgrade-171701 kubelet[5055]: E1107 17:21:07.181962    5055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.002409  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:07 kubernetes-upgrade-171701 kubelet[5065]: E1107 17:21:07.930040    5065 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.002762  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:08 kubernetes-upgrade-171701 kubelet[5076]: E1107 17:21:08.681358    5076 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.003116  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:09 kubernetes-upgrade-171701 kubelet[5087]: E1107 17:21:09.430999    5087 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.003498  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:10 kubernetes-upgrade-171701 kubelet[5098]: E1107 17:21:10.180485    5098 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.003842  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:10 kubernetes-upgrade-171701 kubelet[5109]: E1107 17:21:10.932682    5109 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.004191  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:11 kubernetes-upgrade-171701 kubelet[5119]: E1107 17:21:11.682163    5119 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.004532  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:12 kubernetes-upgrade-171701 kubelet[5130]: E1107 17:21:12.432662    5130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.004889  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:13 kubernetes-upgrade-171701 kubelet[5140]: E1107 17:21:13.182291    5140 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.005245  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:13 kubernetes-upgrade-171701 kubelet[5229]: E1107 17:21:13.936214    5229 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:21:14.005365  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:21:14.005385  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:21:14.021513  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:21:14.021541  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:21:14.075253  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:21:14.075278  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:21:14.075289  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:21:14.110123  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:21:14.110152  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:21:14.110276  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:21:14.110296  209319 out.go:239]   Nov 07 17:21:10 kubernetes-upgrade-171701 kubelet[5109]: E1107 17:21:10.932682    5109 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.110306  209319 out.go:239]   Nov 07 17:21:11 kubernetes-upgrade-171701 kubelet[5119]: E1107 17:21:11.682163    5119 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.110332  209319 out.go:239]   Nov 07 17:21:12 kubernetes-upgrade-171701 kubelet[5130]: E1107 17:21:12.432662    5130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.110345  209319 out.go:239]   Nov 07 17:21:13 kubernetes-upgrade-171701 kubelet[5140]: E1107 17:21:13.182291    5140 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:14.110359  209319 out.go:239]   Nov 07 17:21:13 kubernetes-upgrade-171701 kubelet[5229]: E1107 17:21:13.936214    5229 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:21:14.110371  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:21:14.110384  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
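Editor's note: the crash loop above is kubelet v1.25.3 rejecting `--cni-conf-dir`, a dockershim-era flag that was removed from kubelet along with dockershim in Kubernetes 1.24 but is still being passed by the node's old systemd unit. A minimal sketch of the kind of cleanup a fix needs, assuming the flag lives on the unit's ExecStart line (the sample line below is illustrative, not copied from this run):

```shell
# Hypothetical ExecStart line from a stale kubelet systemd drop-in.
line='ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --cni-conf-dir=/etc/cni/net.d --config=/var/lib/kubelet/config.yaml'

# Strip the removed flag and its value; kubelet >= v1.24 refuses to start with it.
cleaned=$(printf '%s' "$line" | sed 's/ --cni-conf-dir=[^ ]*//')

echo "$cleaned"
# -> ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --config=/var/lib/kubelet/config.yaml
```

On a real node this would be followed by `systemctl daemon-reload` and a kubelet restart; minikube itself fixed this class of upgrade failure by regenerating the kubelet unit for the target Kubernetes version.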
	I1107 17:21:24.111828  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:21:24.244602  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:21:24.244678  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:21:24.268965  209319 cri.go:87] found id: ""
	I1107 17:21:24.269004  209319 logs.go:274] 0 containers: []
	W1107 17:21:24.269013  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:21:24.269025  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:21:24.269084  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:21:24.292275  209319 cri.go:87] found id: ""
	I1107 17:21:24.292299  209319 logs.go:274] 0 containers: []
	W1107 17:21:24.292305  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:21:24.292311  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:21:24.292360  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:21:24.314807  209319 cri.go:87] found id: ""
	I1107 17:21:24.314837  209319 logs.go:274] 0 containers: []
	W1107 17:21:24.314844  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:21:24.314850  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:21:24.314898  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:21:24.340797  209319 cri.go:87] found id: ""
	I1107 17:21:24.340822  209319 logs.go:274] 0 containers: []
	W1107 17:21:24.340829  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:21:24.340835  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:21:24.340874  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:21:24.365165  209319 cri.go:87] found id: ""
	I1107 17:21:24.365190  209319 logs.go:274] 0 containers: []
	W1107 17:21:24.365198  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:21:24.365208  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:21:24.365245  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:21:24.390083  209319 cri.go:87] found id: ""
	I1107 17:21:24.390109  209319 logs.go:274] 0 containers: []
	W1107 17:21:24.390117  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:21:24.390126  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:21:24.390182  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:21:24.417523  209319 cri.go:87] found id: ""
	I1107 17:21:24.417553  209319 logs.go:274] 0 containers: []
	W1107 17:21:24.417563  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:21:24.417572  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:21:24.417624  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:21:24.444039  209319 cri.go:87] found id: ""
	I1107 17:21:24.444060  209319 logs.go:274] 0 containers: []
	W1107 17:21:24.444066  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:21:24.444076  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:21:24.444088  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:21:24.466464  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:34 kubernetes-upgrade-171701 kubelet[4170]: E1107 17:20:34.932355    4170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.467074  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:35 kubernetes-upgrade-171701 kubelet[4181]: E1107 17:20:35.679599    4181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.467656  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:36 kubernetes-upgrade-171701 kubelet[4192]: E1107 17:20:36.430712    4192 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.468249  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:37 kubernetes-upgrade-171701 kubelet[4203]: E1107 17:20:37.184405    4203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.468602  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:37 kubernetes-upgrade-171701 kubelet[4214]: E1107 17:20:37.931528    4214 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.468958  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:38 kubernetes-upgrade-171701 kubelet[4225]: E1107 17:20:38.682533    4225 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.469309  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:39 kubernetes-upgrade-171701 kubelet[4236]: E1107 17:20:39.431612    4236 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.469656  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:40 kubernetes-upgrade-171701 kubelet[4247]: E1107 17:20:40.184447    4247 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.469997  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:40 kubernetes-upgrade-171701 kubelet[4258]: E1107 17:20:40.935084    4258 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.470486  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:41 kubernetes-upgrade-171701 kubelet[4270]: E1107 17:20:41.681296    4270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.470848  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:42 kubernetes-upgrade-171701 kubelet[4360]: E1107 17:20:42.439072    4360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.471206  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:43 kubernetes-upgrade-171701 kubelet[4428]: E1107 17:20:43.181074    4428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.471550  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:43 kubernetes-upgrade-171701 kubelet[4438]: E1107 17:20:43.934507    4438 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.471894  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:44 kubernetes-upgrade-171701 kubelet[4450]: E1107 17:20:44.681966    4450 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.472242  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:45 kubernetes-upgrade-171701 kubelet[4461]: E1107 17:20:45.431434    4461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.472587  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:46 kubernetes-upgrade-171701 kubelet[4472]: E1107 17:20:46.180181    4472 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.472926  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:46 kubernetes-upgrade-171701 kubelet[4483]: E1107 17:20:46.933048    4483 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.473278  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:47 kubernetes-upgrade-171701 kubelet[4494]: E1107 17:20:47.681894    4494 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.473733  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:48 kubernetes-upgrade-171701 kubelet[4505]: E1107 17:20:48.430200    4505 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.474274  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:49 kubernetes-upgrade-171701 kubelet[4517]: E1107 17:20:49.181043    4517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.474798  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:49 kubernetes-upgrade-171701 kubelet[4528]: E1107 17:20:49.935310    4528 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.475426  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:50 kubernetes-upgrade-171701 kubelet[4539]: E1107 17:20:50.682689    4539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.475788  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:51 kubernetes-upgrade-171701 kubelet[4550]: E1107 17:20:51.431212    4550 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.476140  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:52 kubernetes-upgrade-171701 kubelet[4561]: E1107 17:20:52.182976    4561 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.476490  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:52 kubernetes-upgrade-171701 kubelet[4650]: E1107 17:20:52.939382    4650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.476838  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:53 kubernetes-upgrade-171701 kubelet[4719]: E1107 17:20:53.689770    4719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.477189  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:54 kubernetes-upgrade-171701 kubelet[4733]: E1107 17:20:54.442194    4733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.477541  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:55 kubernetes-upgrade-171701 kubelet[4744]: E1107 17:20:55.186832    4744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.478093  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:55 kubernetes-upgrade-171701 kubelet[4754]: E1107 17:20:55.931959    4754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.478690  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:56 kubernetes-upgrade-171701 kubelet[4765]: E1107 17:20:56.681513    4765 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.479057  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:57 kubernetes-upgrade-171701 kubelet[4776]: E1107 17:20:57.430682    4776 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.479408  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:58 kubernetes-upgrade-171701 kubelet[4787]: E1107 17:20:58.183221    4787 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.479759  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:58 kubernetes-upgrade-171701 kubelet[4798]: E1107 17:20:58.938545    4798 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.480107  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:59 kubernetes-upgrade-171701 kubelet[4808]: E1107 17:20:59.681720    4808 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.480450  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:00 kubernetes-upgrade-171701 kubelet[4819]: E1107 17:21:00.431584    4819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.480798  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:01 kubernetes-upgrade-171701 kubelet[4830]: E1107 17:21:01.180783    4830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.481154  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:01 kubernetes-upgrade-171701 kubelet[4841]: E1107 17:21:01.929879    4841 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.481504  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:02 kubernetes-upgrade-171701 kubelet[4852]: E1107 17:21:02.681218    4852 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.481847  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:03 kubernetes-upgrade-171701 kubelet[4941]: E1107 17:21:03.437154    4941 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.482201  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:04 kubernetes-upgrade-171701 kubelet[5010]: E1107 17:21:04.180424    5010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.482579  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:04 kubernetes-upgrade-171701 kubelet[5021]: E1107 17:21:04.931239    5021 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.482922  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:05 kubernetes-upgrade-171701 kubelet[5032]: E1107 17:21:05.682840    5032 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.483270  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:06 kubernetes-upgrade-171701 kubelet[5044]: E1107 17:21:06.430655    5044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.483673  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:07 kubernetes-upgrade-171701 kubelet[5055]: E1107 17:21:07.181962    5055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.484033  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:07 kubernetes-upgrade-171701 kubelet[5065]: E1107 17:21:07.930040    5065 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.484371  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:08 kubernetes-upgrade-171701 kubelet[5076]: E1107 17:21:08.681358    5076 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.484717  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:09 kubernetes-upgrade-171701 kubelet[5087]: E1107 17:21:09.430999    5087 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.485061  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:10 kubernetes-upgrade-171701 kubelet[5098]: E1107 17:21:10.180485    5098 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.485409  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:10 kubernetes-upgrade-171701 kubelet[5109]: E1107 17:21:10.932682    5109 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.485758  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:11 kubernetes-upgrade-171701 kubelet[5119]: E1107 17:21:11.682163    5119 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.486104  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:12 kubernetes-upgrade-171701 kubelet[5130]: E1107 17:21:12.432662    5130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.486479  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:13 kubernetes-upgrade-171701 kubelet[5140]: E1107 17:21:13.182291    5140 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.486829  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:13 kubernetes-upgrade-171701 kubelet[5229]: E1107 17:21:13.936214    5229 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.487178  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:14 kubernetes-upgrade-171701 kubelet[5299]: E1107 17:21:14.681364    5299 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.487526  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:15 kubernetes-upgrade-171701 kubelet[5310]: E1107 17:21:15.431810    5310 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.487867  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:16 kubernetes-upgrade-171701 kubelet[5321]: E1107 17:21:16.181625    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.488225  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:16 kubernetes-upgrade-171701 kubelet[5332]: E1107 17:21:16.939689    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.488576  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:17 kubernetes-upgrade-171701 kubelet[5342]: E1107 17:21:17.680886    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.488923  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:18 kubernetes-upgrade-171701 kubelet[5352]: E1107 17:21:18.436211    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.489285  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:19 kubernetes-upgrade-171701 kubelet[5363]: E1107 17:21:19.191559    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.489690  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:19 kubernetes-upgrade-171701 kubelet[5374]: E1107 17:21:19.950751    5374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.490326  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:20 kubernetes-upgrade-171701 kubelet[5385]: E1107 17:21:20.683847    5385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.490862  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:21 kubernetes-upgrade-171701 kubelet[5395]: E1107 17:21:21.438919    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.491225  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:22 kubernetes-upgrade-171701 kubelet[5405]: E1107 17:21:22.185038    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.491578  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:22 kubernetes-upgrade-171701 kubelet[5417]: E1107 17:21:22.933748    5417 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.491928  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:23 kubernetes-upgrade-171701 kubelet[5428]: E1107 17:21:23.685540    5428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.492279  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:24 kubernetes-upgrade-171701 kubelet[5518]: E1107 17:21:24.440172    5518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
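Every kubelet restart in the run above dies on the same parse error: `--cni-conf-dir` is not a recognized flag in the v1.25.3 kubelet being launched (it was removed along with the other dockershim-era networking flags), so systemd keeps respawning a kubelet that exits immediately and the control plane never comes up. A minimal sketch of the kind of filter logs.go applies when it tags these journal lines as "Found kubelet problem" — `journal_excerpt` is a canned sample standing in for live `journalctl -u kubelet` output, and the grep patterns are illustrative, not minikube's actual code:

```shell
# Sketch: journal_excerpt is a canned sample, not live `journalctl -u kubelet`
# output; the greps stand in for the pattern match logs.go performs.
journal_excerpt='Nov 07 17:21:24 kubernetes-upgrade-171701 kubelet[5518]: E1107 17:21:24.440172    5518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
Nov 07 17:21:24 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Scheduled restart job'

# Keep only the kubelet failure lines, then extract the rejected flag.
echo "$journal_excerpt" | grep 'command failed' | grep -o -- '--cni-conf-dir'
# prints: --cni-conf-dir
```

The fix implied by the error is configuration-side: the upgraded node must drop the stale flag from the kubelet invocation before the service can stay up.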
	I1107 17:21:24.492397  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:21:24.492415  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:21:24.509344  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:21:24.509370  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:21:24.572767  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
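The `kubectl describe nodes` probe fails with "connection refused" on localhost:8443, which is consistent with the empty `crictl ps` listings above: with the kubelet crash-looping, no kube-apiserver container was ever started, so nothing listens on the secure port. A hedged sketch of that check as a raw TCP probe — it assumes, as in this log, that nothing is bound to 127.0.0.1:8443, and uses bash's `/dev/tcp` redirection rather than anything kubectl actually calls:

```shell
# Sketch: a raw TCP probe of the apiserver secure port, standing in for the
# connection attempt kubectl makes. Assumes bash-style /dev/tcp support.
if (exec 3<>/dev/tcp/127.0.0.1/8443) 2>/dev/null; then
  echo "apiserver port open"
else
  echo "connection refused: nothing listening on 8443"
fi
```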
	I1107 17:21:24.572791  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:21:24.572803  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:21:24.608603  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:21:24.608641  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:21:24.637399  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:21:24.637428  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:21:24.637562  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:21:24.637577  209319 out.go:239]   Nov 07 17:21:21 kubernetes-upgrade-171701 kubelet[5395]: E1107 17:21:21.438919    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.637584  209319 out.go:239]   Nov 07 17:21:22 kubernetes-upgrade-171701 kubelet[5405]: E1107 17:21:22.185038    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.637592  209319 out.go:239]   Nov 07 17:21:22 kubernetes-upgrade-171701 kubelet[5417]: E1107 17:21:22.933748    5417 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.637603  209319 out.go:239]   Nov 07 17:21:23 kubernetes-upgrade-171701 kubelet[5428]: E1107 17:21:23.685540    5428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:24.637611  209319 out.go:239]   Nov 07 17:21:24 kubernetes-upgrade-171701 kubelet[5518]: E1107 17:21:24.440172    5518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:21:24.637620  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:21:24.637627  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
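Between report cycles minikube simply re-polls: the timestamps (17:21:14, 17:21:24, 17:21:34) show a roughly 10-second loop around `sudo pgrep -xnf kube-apiserver.*minikube.*`, re-gathering the same logs each pass because the process never appears. A minimal sketch of that wait loop, with a stub `probe` function (hypothetical helper, not minikube code) standing in for the pgrep check, which fails on every attempt in this run:

```shell
# Sketch: probe() is a stand-in for `pgrep -xnf kube-apiserver.*minikube.*`;
# here it always fails, as it does throughout this log.
probe() { return 1; }

for attempt in 1 2 3; do
  if probe; then
    echo "apiserver up after attempt $attempt"
    break
  fi
  echo "attempt $attempt: no apiserver process yet"
done
# prints "attempt N: no apiserver process yet" for N = 1, 2, 3
```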
	I1107 17:21:34.638390  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:21:34.744380  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:21:34.744473  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:21:34.769547  209319 cri.go:87] found id: ""
	I1107 17:21:34.769571  209319 logs.go:274] 0 containers: []
	W1107 17:21:34.769577  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:21:34.769583  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:21:34.769622  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:21:34.792270  209319 cri.go:87] found id: ""
	I1107 17:21:34.792292  209319 logs.go:274] 0 containers: []
	W1107 17:21:34.792298  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:21:34.792304  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:21:34.792346  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:21:34.815397  209319 cri.go:87] found id: ""
	I1107 17:21:34.815428  209319 logs.go:274] 0 containers: []
	W1107 17:21:34.815438  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:21:34.815447  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:21:34.815501  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:21:34.838563  209319 cri.go:87] found id: ""
	I1107 17:21:34.838591  209319 logs.go:274] 0 containers: []
	W1107 17:21:34.838599  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:21:34.838608  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:21:34.838662  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:21:34.860773  209319 cri.go:87] found id: ""
	I1107 17:21:34.860803  209319 logs.go:274] 0 containers: []
	W1107 17:21:34.860813  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:21:34.860821  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:21:34.860865  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:21:34.883336  209319 cri.go:87] found id: ""
	I1107 17:21:34.883367  209319 logs.go:274] 0 containers: []
	W1107 17:21:34.883375  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:21:34.883381  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:21:34.883436  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:21:34.907649  209319 cri.go:87] found id: ""
	I1107 17:21:34.907679  209319 logs.go:274] 0 containers: []
	W1107 17:21:34.907687  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:21:34.907695  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:21:34.907749  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:21:34.931994  209319 cri.go:87] found id: ""
	I1107 17:21:34.932030  209319 logs.go:274] 0 containers: []
	W1107 17:21:34.932037  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:21:34.932047  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:21:34.932059  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:21:34.987542  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:21:34.987565  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:21:34.987575  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:21:35.035387  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:21:35.035424  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:21:35.062825  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:21:35.062852  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:21:35.078854  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:45 kubernetes-upgrade-171701 kubelet[4461]: E1107 17:20:45.431434    4461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.079461  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:46 kubernetes-upgrade-171701 kubelet[4472]: E1107 17:20:46.180181    4472 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.079834  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:46 kubernetes-upgrade-171701 kubelet[4483]: E1107 17:20:46.933048    4483 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.080176  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:47 kubernetes-upgrade-171701 kubelet[4494]: E1107 17:20:47.681894    4494 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.080528  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:48 kubernetes-upgrade-171701 kubelet[4505]: E1107 17:20:48.430200    4505 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.080888  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:49 kubernetes-upgrade-171701 kubelet[4517]: E1107 17:20:49.181043    4517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.081286  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:49 kubernetes-upgrade-171701 kubelet[4528]: E1107 17:20:49.935310    4528 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.081845  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:50 kubernetes-upgrade-171701 kubelet[4539]: E1107 17:20:50.682689    4539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.082482  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:51 kubernetes-upgrade-171701 kubelet[4550]: E1107 17:20:51.431212    4550 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.082979  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:52 kubernetes-upgrade-171701 kubelet[4561]: E1107 17:20:52.182976    4561 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.083384  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:52 kubernetes-upgrade-171701 kubelet[4650]: E1107 17:20:52.939382    4650 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.083746  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:53 kubernetes-upgrade-171701 kubelet[4719]: E1107 17:20:53.689770    4719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.084096  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:54 kubernetes-upgrade-171701 kubelet[4733]: E1107 17:20:54.442194    4733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.084455  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:55 kubernetes-upgrade-171701 kubelet[4744]: E1107 17:20:55.186832    4744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.084808  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:55 kubernetes-upgrade-171701 kubelet[4754]: E1107 17:20:55.931959    4754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.085152  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:56 kubernetes-upgrade-171701 kubelet[4765]: E1107 17:20:56.681513    4765 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.085490  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:57 kubernetes-upgrade-171701 kubelet[4776]: E1107 17:20:57.430682    4776 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.085848  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:58 kubernetes-upgrade-171701 kubelet[4787]: E1107 17:20:58.183221    4787 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.086202  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:58 kubernetes-upgrade-171701 kubelet[4798]: E1107 17:20:58.938545    4798 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.086576  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:59 kubernetes-upgrade-171701 kubelet[4808]: E1107 17:20:59.681720    4808 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.086932  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:00 kubernetes-upgrade-171701 kubelet[4819]: E1107 17:21:00.431584    4819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.087275  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:01 kubernetes-upgrade-171701 kubelet[4830]: E1107 17:21:01.180783    4830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.087623  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:01 kubernetes-upgrade-171701 kubelet[4841]: E1107 17:21:01.929879    4841 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.087975  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:02 kubernetes-upgrade-171701 kubelet[4852]: E1107 17:21:02.681218    4852 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.088336  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:03 kubernetes-upgrade-171701 kubelet[4941]: E1107 17:21:03.437154    4941 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.088695  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:04 kubernetes-upgrade-171701 kubelet[5010]: E1107 17:21:04.180424    5010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.089036  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:04 kubernetes-upgrade-171701 kubelet[5021]: E1107 17:21:04.931239    5021 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.089393  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:05 kubernetes-upgrade-171701 kubelet[5032]: E1107 17:21:05.682840    5032 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.089742  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:06 kubernetes-upgrade-171701 kubelet[5044]: E1107 17:21:06.430655    5044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.090082  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:07 kubernetes-upgrade-171701 kubelet[5055]: E1107 17:21:07.181962    5055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.090462  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:07 kubernetes-upgrade-171701 kubelet[5065]: E1107 17:21:07.930040    5065 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.090813  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:08 kubernetes-upgrade-171701 kubelet[5076]: E1107 17:21:08.681358    5076 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.091154  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:09 kubernetes-upgrade-171701 kubelet[5087]: E1107 17:21:09.430999    5087 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.091527  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:10 kubernetes-upgrade-171701 kubelet[5098]: E1107 17:21:10.180485    5098 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.091880  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:10 kubernetes-upgrade-171701 kubelet[5109]: E1107 17:21:10.932682    5109 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.092280  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:11 kubernetes-upgrade-171701 kubelet[5119]: E1107 17:21:11.682163    5119 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.092699  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:12 kubernetes-upgrade-171701 kubelet[5130]: E1107 17:21:12.432662    5130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.093063  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:13 kubernetes-upgrade-171701 kubelet[5140]: E1107 17:21:13.182291    5140 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.093437  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:13 kubernetes-upgrade-171701 kubelet[5229]: E1107 17:21:13.936214    5229 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.093792  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:14 kubernetes-upgrade-171701 kubelet[5299]: E1107 17:21:14.681364    5299 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.094140  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:15 kubernetes-upgrade-171701 kubelet[5310]: E1107 17:21:15.431810    5310 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.094513  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:16 kubernetes-upgrade-171701 kubelet[5321]: E1107 17:21:16.181625    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.094879  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:16 kubernetes-upgrade-171701 kubelet[5332]: E1107 17:21:16.939689    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.095221  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:17 kubernetes-upgrade-171701 kubelet[5342]: E1107 17:21:17.680886    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.095563  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:18 kubernetes-upgrade-171701 kubelet[5352]: E1107 17:21:18.436211    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.095916  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:19 kubernetes-upgrade-171701 kubelet[5363]: E1107 17:21:19.191559    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.096258  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:19 kubernetes-upgrade-171701 kubelet[5374]: E1107 17:21:19.950751    5374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.096596  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:20 kubernetes-upgrade-171701 kubelet[5385]: E1107 17:21:20.683847    5385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.096944  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:21 kubernetes-upgrade-171701 kubelet[5395]: E1107 17:21:21.438919    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.097297  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:22 kubernetes-upgrade-171701 kubelet[5405]: E1107 17:21:22.185038    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.097645  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:22 kubernetes-upgrade-171701 kubelet[5417]: E1107 17:21:22.933748    5417 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.097991  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:23 kubernetes-upgrade-171701 kubelet[5428]: E1107 17:21:23.685540    5428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.098361  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:24 kubernetes-upgrade-171701 kubelet[5518]: E1107 17:21:24.440172    5518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.098708  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:25 kubernetes-upgrade-171701 kubelet[5588]: E1107 17:21:25.181880    5588 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.099057  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:25 kubernetes-upgrade-171701 kubelet[5598]: E1107 17:21:25.932876    5598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.099407  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:26 kubernetes-upgrade-171701 kubelet[5610]: E1107 17:21:26.687004    5610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.099763  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:27 kubernetes-upgrade-171701 kubelet[5621]: E1107 17:21:27.433534    5621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.100102  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:28 kubernetes-upgrade-171701 kubelet[5632]: E1107 17:21:28.183568    5632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.100445  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:28 kubernetes-upgrade-171701 kubelet[5644]: E1107 17:21:28.935319    5644 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.100815  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:29 kubernetes-upgrade-171701 kubelet[5656]: E1107 17:21:29.685212    5656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.101163  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:30 kubernetes-upgrade-171701 kubelet[5668]: E1107 17:21:30.430371    5668 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.101509  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:31 kubernetes-upgrade-171701 kubelet[5679]: E1107 17:21:31.181477    5679 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.101904  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:31 kubernetes-upgrade-171701 kubelet[5691]: E1107 17:21:31.935381    5691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.102275  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:32 kubernetes-upgrade-171701 kubelet[5702]: E1107 17:21:32.687329    5702 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.102642  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:33 kubernetes-upgrade-171701 kubelet[5713]: E1107 17:21:33.437705    5713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.103002  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:34 kubernetes-upgrade-171701 kubelet[5723]: E1107 17:21:34.182195    5723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.103353  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:34 kubernetes-upgrade-171701 kubelet[5813]: E1107 17:21:34.936169    5813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:21:35.103487  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:21:35.103503  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:21:35.119311  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:21:35.119337  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:21:35.119480  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:21:35.119498  209319 out.go:239]   Nov 07 17:21:31 kubernetes-upgrade-171701 kubelet[5691]: E1107 17:21:31.935381    5691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.119507  209319 out.go:239]   Nov 07 17:21:32 kubernetes-upgrade-171701 kubelet[5702]: E1107 17:21:32.687329    5702 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.119515  209319 out.go:239]   Nov 07 17:21:33 kubernetes-upgrade-171701 kubelet[5713]: E1107 17:21:33.437705    5713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.119527  209319 out.go:239]   Nov 07 17:21:34 kubernetes-upgrade-171701 kubelet[5723]: E1107 17:21:34.182195    5723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:35.119539  209319 out.go:239]   Nov 07 17:21:34 kubernetes-upgrade-171701 kubelet[5813]: E1107 17:21:34.936169    5813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:21:35.119545  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:21:35.119557  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:21:45.121037  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:21:45.243845  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:21:45.243914  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:21:45.268069  209319 cri.go:87] found id: ""
	I1107 17:21:45.268093  209319 logs.go:274] 0 containers: []
	W1107 17:21:45.268102  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:21:45.268109  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:21:45.268159  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:21:45.291575  209319 cri.go:87] found id: ""
	I1107 17:21:45.291616  209319 logs.go:274] 0 containers: []
	W1107 17:21:45.291626  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:21:45.291635  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:21:45.291681  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:21:45.314087  209319 cri.go:87] found id: ""
	I1107 17:21:45.314121  209319 logs.go:274] 0 containers: []
	W1107 17:21:45.314127  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:21:45.314133  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:21:45.314184  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:21:45.337135  209319 cri.go:87] found id: ""
	I1107 17:21:45.337157  209319 logs.go:274] 0 containers: []
	W1107 17:21:45.337163  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:21:45.337170  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:21:45.337216  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:21:45.360041  209319 cri.go:87] found id: ""
	I1107 17:21:45.360064  209319 logs.go:274] 0 containers: []
	W1107 17:21:45.360071  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:21:45.360077  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:21:45.360120  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:21:45.384084  209319 cri.go:87] found id: ""
	I1107 17:21:45.384113  209319 logs.go:274] 0 containers: []
	W1107 17:21:45.384121  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:21:45.384130  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:21:45.384185  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:21:45.412262  209319 cri.go:87] found id: ""
	I1107 17:21:45.412291  209319 logs.go:274] 0 containers: []
	W1107 17:21:45.412299  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:21:45.412307  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:21:45.412360  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:21:45.438688  209319 cri.go:87] found id: ""
	I1107 17:21:45.438715  209319 logs.go:274] 0 containers: []
	W1107 17:21:45.438722  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:21:45.438731  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:21:45.438743  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:21:45.455965  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:55 kubernetes-upgrade-171701 kubelet[4754]: E1107 17:20:55.931959    4754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.456337  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:56 kubernetes-upgrade-171701 kubelet[4765]: E1107 17:20:56.681513    4765 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.456686  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:57 kubernetes-upgrade-171701 kubelet[4776]: E1107 17:20:57.430682    4776 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.457033  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:58 kubernetes-upgrade-171701 kubelet[4787]: E1107 17:20:58.183221    4787 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.457376  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:58 kubernetes-upgrade-171701 kubelet[4798]: E1107 17:20:58.938545    4798 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.457716  209319 logs.go:138] Found kubelet problem: Nov 07 17:20:59 kubernetes-upgrade-171701 kubelet[4808]: E1107 17:20:59.681720    4808 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.458058  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:00 kubernetes-upgrade-171701 kubelet[4819]: E1107 17:21:00.431584    4819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.458444  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:01 kubernetes-upgrade-171701 kubelet[4830]: E1107 17:21:01.180783    4830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.458822  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:01 kubernetes-upgrade-171701 kubelet[4841]: E1107 17:21:01.929879    4841 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.459188  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:02 kubernetes-upgrade-171701 kubelet[4852]: E1107 17:21:02.681218    4852 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.459537  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:03 kubernetes-upgrade-171701 kubelet[4941]: E1107 17:21:03.437154    4941 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.459891  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:04 kubernetes-upgrade-171701 kubelet[5010]: E1107 17:21:04.180424    5010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.460255  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:04 kubernetes-upgrade-171701 kubelet[5021]: E1107 17:21:04.931239    5021 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.460608  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:05 kubernetes-upgrade-171701 kubelet[5032]: E1107 17:21:05.682840    5032 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.460976  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:06 kubernetes-upgrade-171701 kubelet[5044]: E1107 17:21:06.430655    5044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.461350  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:07 kubernetes-upgrade-171701 kubelet[5055]: E1107 17:21:07.181962    5055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.461696  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:07 kubernetes-upgrade-171701 kubelet[5065]: E1107 17:21:07.930040    5065 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.462044  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:08 kubernetes-upgrade-171701 kubelet[5076]: E1107 17:21:08.681358    5076 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.462419  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:09 kubernetes-upgrade-171701 kubelet[5087]: E1107 17:21:09.430999    5087 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.462764  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:10 kubernetes-upgrade-171701 kubelet[5098]: E1107 17:21:10.180485    5098 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.463117  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:10 kubernetes-upgrade-171701 kubelet[5109]: E1107 17:21:10.932682    5109 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.463458  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:11 kubernetes-upgrade-171701 kubelet[5119]: E1107 17:21:11.682163    5119 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.463806  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:12 kubernetes-upgrade-171701 kubelet[5130]: E1107 17:21:12.432662    5130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.464149  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:13 kubernetes-upgrade-171701 kubelet[5140]: E1107 17:21:13.182291    5140 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.464495  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:13 kubernetes-upgrade-171701 kubelet[5229]: E1107 17:21:13.936214    5229 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.464832  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:14 kubernetes-upgrade-171701 kubelet[5299]: E1107 17:21:14.681364    5299 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.465193  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:15 kubernetes-upgrade-171701 kubelet[5310]: E1107 17:21:15.431810    5310 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.465539  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:16 kubernetes-upgrade-171701 kubelet[5321]: E1107 17:21:16.181625    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.465909  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:16 kubernetes-upgrade-171701 kubelet[5332]: E1107 17:21:16.939689    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.466265  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:17 kubernetes-upgrade-171701 kubelet[5342]: E1107 17:21:17.680886    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.466744  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:18 kubernetes-upgrade-171701 kubelet[5352]: E1107 17:21:18.436211    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.467132  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:19 kubernetes-upgrade-171701 kubelet[5363]: E1107 17:21:19.191559    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.467471  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:19 kubernetes-upgrade-171701 kubelet[5374]: E1107 17:21:19.950751    5374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.467823  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:20 kubernetes-upgrade-171701 kubelet[5385]: E1107 17:21:20.683847    5385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.468169  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:21 kubernetes-upgrade-171701 kubelet[5395]: E1107 17:21:21.438919    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.468522  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:22 kubernetes-upgrade-171701 kubelet[5405]: E1107 17:21:22.185038    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.468862  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:22 kubernetes-upgrade-171701 kubelet[5417]: E1107 17:21:22.933748    5417 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.469216  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:23 kubernetes-upgrade-171701 kubelet[5428]: E1107 17:21:23.685540    5428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.469564  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:24 kubernetes-upgrade-171701 kubelet[5518]: E1107 17:21:24.440172    5518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.469913  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:25 kubernetes-upgrade-171701 kubelet[5588]: E1107 17:21:25.181880    5588 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.470278  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:25 kubernetes-upgrade-171701 kubelet[5598]: E1107 17:21:25.932876    5598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.470649  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:26 kubernetes-upgrade-171701 kubelet[5610]: E1107 17:21:26.687004    5610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.470993  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:27 kubernetes-upgrade-171701 kubelet[5621]: E1107 17:21:27.433534    5621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.471337  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:28 kubernetes-upgrade-171701 kubelet[5632]: E1107 17:21:28.183568    5632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.471698  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:28 kubernetes-upgrade-171701 kubelet[5644]: E1107 17:21:28.935319    5644 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.472047  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:29 kubernetes-upgrade-171701 kubelet[5656]: E1107 17:21:29.685212    5656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.472393  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:30 kubernetes-upgrade-171701 kubelet[5668]: E1107 17:21:30.430371    5668 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.472732  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:31 kubernetes-upgrade-171701 kubelet[5679]: E1107 17:21:31.181477    5679 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.473092  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:31 kubernetes-upgrade-171701 kubelet[5691]: E1107 17:21:31.935381    5691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.473440  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:32 kubernetes-upgrade-171701 kubelet[5702]: E1107 17:21:32.687329    5702 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.473788  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:33 kubernetes-upgrade-171701 kubelet[5713]: E1107 17:21:33.437705    5713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.474157  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:34 kubernetes-upgrade-171701 kubelet[5723]: E1107 17:21:34.182195    5723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.474522  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:34 kubernetes-upgrade-171701 kubelet[5813]: E1107 17:21:34.936169    5813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.474863  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:35 kubernetes-upgrade-171701 kubelet[5881]: E1107 17:21:35.680212    5881 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.475211  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:36 kubernetes-upgrade-171701 kubelet[5892]: E1107 17:21:36.432454    5892 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.475552  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:37 kubernetes-upgrade-171701 kubelet[5903]: E1107 17:21:37.183320    5903 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.475893  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:37 kubernetes-upgrade-171701 kubelet[5914]: E1107 17:21:37.933921    5914 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.476239  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:38 kubernetes-upgrade-171701 kubelet[5925]: E1107 17:21:38.681885    5925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.476587  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:39 kubernetes-upgrade-171701 kubelet[5936]: E1107 17:21:39.432473    5936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.476937  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:40 kubernetes-upgrade-171701 kubelet[5948]: E1107 17:21:40.182154    5948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.477288  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:40 kubernetes-upgrade-171701 kubelet[5959]: E1107 17:21:40.932501    5959 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.477633  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:41 kubernetes-upgrade-171701 kubelet[5970]: E1107 17:21:41.681766    5970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.477971  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:42 kubernetes-upgrade-171701 kubelet[5981]: E1107 17:21:42.432942    5981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.478367  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:43 kubernetes-upgrade-171701 kubelet[5992]: E1107 17:21:43.182862    5992 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.478729  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:43 kubernetes-upgrade-171701 kubelet[6003]: E1107 17:21:43.931957    6003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.479075  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:44 kubernetes-upgrade-171701 kubelet[6014]: E1107 17:21:44.681972    6014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.479433  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:45 kubernetes-upgrade-171701 kubelet[6105]: E1107 17:21:45.440951    6105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:21:45.479550  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:21:45.479567  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:21:45.496085  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:21:45.496113  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:21:45.551111  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:21:45.551134  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:21:45.551144  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:21:45.588046  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:21:45.588078  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:21:45.613687  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:21:45.613722  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:21:45.613869  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:21:45.613890  209319 out.go:239]   Nov 07 17:21:42 kubernetes-upgrade-171701 kubelet[5981]: E1107 17:21:42.432942    5981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.613898  209319 out.go:239]   Nov 07 17:21:43 kubernetes-upgrade-171701 kubelet[5992]: E1107 17:21:43.182862    5992 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.613904  209319 out.go:239]   Nov 07 17:21:43 kubernetes-upgrade-171701 kubelet[6003]: E1107 17:21:43.931957    6003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.613912  209319 out.go:239]   Nov 07 17:21:44 kubernetes-upgrade-171701 kubelet[6014]: E1107 17:21:44.681972    6014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:45.613920  209319 out.go:239]   Nov 07 17:21:45 kubernetes-upgrade-171701 kubelet[6105]: E1107 17:21:45.440951    6105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:21:45.613927  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:21:45.613947  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:21:55.615013  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:21:55.744382  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:21:55.744472  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:21:55.768133  209319 cri.go:87] found id: ""
	I1107 17:21:55.768159  209319 logs.go:274] 0 containers: []
	W1107 17:21:55.768166  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:21:55.768173  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:21:55.768220  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:21:55.790342  209319 cri.go:87] found id: ""
	I1107 17:21:55.790369  209319 logs.go:274] 0 containers: []
	W1107 17:21:55.790374  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:21:55.790381  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:21:55.790425  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:21:55.812185  209319 cri.go:87] found id: ""
	I1107 17:21:55.812213  209319 logs.go:274] 0 containers: []
	W1107 17:21:55.812221  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:21:55.812229  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:21:55.812278  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:21:55.833903  209319 cri.go:87] found id: ""
	I1107 17:21:55.833928  209319 logs.go:274] 0 containers: []
	W1107 17:21:55.833936  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:21:55.833944  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:21:55.834000  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:21:55.855766  209319 cri.go:87] found id: ""
	I1107 17:21:55.855795  209319 logs.go:274] 0 containers: []
	W1107 17:21:55.855803  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:21:55.855810  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:21:55.855851  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:21:55.878790  209319 cri.go:87] found id: ""
	I1107 17:21:55.878817  209319 logs.go:274] 0 containers: []
	W1107 17:21:55.878823  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:21:55.878829  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:21:55.878878  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:21:55.902330  209319 cri.go:87] found id: ""
	I1107 17:21:55.902358  209319 logs.go:274] 0 containers: []
	W1107 17:21:55.902367  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:21:55.902375  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:21:55.902430  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:21:55.927775  209319 cri.go:87] found id: ""
	I1107 17:21:55.927800  209319 logs.go:274] 0 containers: []
	W1107 17:21:55.927808  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:21:55.927821  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:21:55.927835  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:21:55.953830  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:21:55.953861  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:21:55.968704  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:06 kubernetes-upgrade-171701 kubelet[5044]: E1107 17:21:06.430655    5044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.969086  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:07 kubernetes-upgrade-171701 kubelet[5055]: E1107 17:21:07.181962    5055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.969462  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:07 kubernetes-upgrade-171701 kubelet[5065]: E1107 17:21:07.930040    5065 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.969917  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:08 kubernetes-upgrade-171701 kubelet[5076]: E1107 17:21:08.681358    5076 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.970297  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:09 kubernetes-upgrade-171701 kubelet[5087]: E1107 17:21:09.430999    5087 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.970686  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:10 kubernetes-upgrade-171701 kubelet[5098]: E1107 17:21:10.180485    5098 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.971067  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:10 kubernetes-upgrade-171701 kubelet[5109]: E1107 17:21:10.932682    5109 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.971432  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:11 kubernetes-upgrade-171701 kubelet[5119]: E1107 17:21:11.682163    5119 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.971796  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:12 kubernetes-upgrade-171701 kubelet[5130]: E1107 17:21:12.432662    5130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.972150  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:13 kubernetes-upgrade-171701 kubelet[5140]: E1107 17:21:13.182291    5140 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.972518  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:13 kubernetes-upgrade-171701 kubelet[5229]: E1107 17:21:13.936214    5229 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.972881  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:14 kubernetes-upgrade-171701 kubelet[5299]: E1107 17:21:14.681364    5299 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.973232  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:15 kubernetes-upgrade-171701 kubelet[5310]: E1107 17:21:15.431810    5310 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.973597  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:16 kubernetes-upgrade-171701 kubelet[5321]: E1107 17:21:16.181625    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.973949  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:16 kubernetes-upgrade-171701 kubelet[5332]: E1107 17:21:16.939689    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.974357  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:17 kubernetes-upgrade-171701 kubelet[5342]: E1107 17:21:17.680886    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.974731  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:18 kubernetes-upgrade-171701 kubelet[5352]: E1107 17:21:18.436211    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.975086  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:19 kubernetes-upgrade-171701 kubelet[5363]: E1107 17:21:19.191559    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.975452  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:19 kubernetes-upgrade-171701 kubelet[5374]: E1107 17:21:19.950751    5374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.975817  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:20 kubernetes-upgrade-171701 kubelet[5385]: E1107 17:21:20.683847    5385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.976178  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:21 kubernetes-upgrade-171701 kubelet[5395]: E1107 17:21:21.438919    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.976544  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:22 kubernetes-upgrade-171701 kubelet[5405]: E1107 17:21:22.185038    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.976903  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:22 kubernetes-upgrade-171701 kubelet[5417]: E1107 17:21:22.933748    5417 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.977262  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:23 kubernetes-upgrade-171701 kubelet[5428]: E1107 17:21:23.685540    5428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.977628  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:24 kubernetes-upgrade-171701 kubelet[5518]: E1107 17:21:24.440172    5518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.977992  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:25 kubernetes-upgrade-171701 kubelet[5588]: E1107 17:21:25.181880    5588 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.978378  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:25 kubernetes-upgrade-171701 kubelet[5598]: E1107 17:21:25.932876    5598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.978753  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:26 kubernetes-upgrade-171701 kubelet[5610]: E1107 17:21:26.687004    5610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.979111  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:27 kubernetes-upgrade-171701 kubelet[5621]: E1107 17:21:27.433534    5621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.979473  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:28 kubernetes-upgrade-171701 kubelet[5632]: E1107 17:21:28.183568    5632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.979835  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:28 kubernetes-upgrade-171701 kubelet[5644]: E1107 17:21:28.935319    5644 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.980201  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:29 kubernetes-upgrade-171701 kubelet[5656]: E1107 17:21:29.685212    5656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.980560  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:30 kubernetes-upgrade-171701 kubelet[5668]: E1107 17:21:30.430371    5668 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.980935  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:31 kubernetes-upgrade-171701 kubelet[5679]: E1107 17:21:31.181477    5679 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.981297  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:31 kubernetes-upgrade-171701 kubelet[5691]: E1107 17:21:31.935381    5691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.981661  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:32 kubernetes-upgrade-171701 kubelet[5702]: E1107 17:21:32.687329    5702 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.982015  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:33 kubernetes-upgrade-171701 kubelet[5713]: E1107 17:21:33.437705    5713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.982393  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:34 kubernetes-upgrade-171701 kubelet[5723]: E1107 17:21:34.182195    5723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.982760  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:34 kubernetes-upgrade-171701 kubelet[5813]: E1107 17:21:34.936169    5813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.983122  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:35 kubernetes-upgrade-171701 kubelet[5881]: E1107 17:21:35.680212    5881 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.983495  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:36 kubernetes-upgrade-171701 kubelet[5892]: E1107 17:21:36.432454    5892 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.983875  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:37 kubernetes-upgrade-171701 kubelet[5903]: E1107 17:21:37.183320    5903 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.984233  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:37 kubernetes-upgrade-171701 kubelet[5914]: E1107 17:21:37.933921    5914 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.984597  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:38 kubernetes-upgrade-171701 kubelet[5925]: E1107 17:21:38.681885    5925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.984988  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:39 kubernetes-upgrade-171701 kubelet[5936]: E1107 17:21:39.432473    5936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.985351  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:40 kubernetes-upgrade-171701 kubelet[5948]: E1107 17:21:40.182154    5948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.985724  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:40 kubernetes-upgrade-171701 kubelet[5959]: E1107 17:21:40.932501    5959 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.986100  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:41 kubernetes-upgrade-171701 kubelet[5970]: E1107 17:21:41.681766    5970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.986549  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:42 kubernetes-upgrade-171701 kubelet[5981]: E1107 17:21:42.432942    5981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.986909  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:43 kubernetes-upgrade-171701 kubelet[5992]: E1107 17:21:43.182862    5992 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.987263  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:43 kubernetes-upgrade-171701 kubelet[6003]: E1107 17:21:43.931957    6003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.987622  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:44 kubernetes-upgrade-171701 kubelet[6014]: E1107 17:21:44.681972    6014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.987964  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:45 kubernetes-upgrade-171701 kubelet[6105]: E1107 17:21:45.440951    6105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.988310  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:46 kubernetes-upgrade-171701 kubelet[6175]: E1107 17:21:46.181839    6175 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.988663  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:46 kubernetes-upgrade-171701 kubelet[6186]: E1107 17:21:46.932298    6186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.989018  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:47 kubernetes-upgrade-171701 kubelet[6197]: E1107 17:21:47.680590    6197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.989379  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:48 kubernetes-upgrade-171701 kubelet[6209]: E1107 17:21:48.431211    6209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.989727  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:49 kubernetes-upgrade-171701 kubelet[6220]: E1107 17:21:49.180624    6220 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.990074  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:49 kubernetes-upgrade-171701 kubelet[6231]: E1107 17:21:49.932522    6231 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.990438  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:50 kubernetes-upgrade-171701 kubelet[6243]: E1107 17:21:50.680257    6243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.990788  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:51 kubernetes-upgrade-171701 kubelet[6254]: E1107 17:21:51.435278    6254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.991133  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:52 kubernetes-upgrade-171701 kubelet[6265]: E1107 17:21:52.186539    6265 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.991484  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:52 kubernetes-upgrade-171701 kubelet[6276]: E1107 17:21:52.934800    6276 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.991835  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:53 kubernetes-upgrade-171701 kubelet[6286]: E1107 17:21:53.687401    6286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.992191  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:54 kubernetes-upgrade-171701 kubelet[6297]: E1107 17:21:54.431383    6297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.992532  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:55 kubernetes-upgrade-171701 kubelet[6308]: E1107 17:21:55.183511    6308 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:55.992886  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:55 kubernetes-upgrade-171701 kubelet[6397]: E1107 17:21:55.935934    6397 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:21:55.993002  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:21:55.993023  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:21:56.009929  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:21:56.009961  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:21:56.062660  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:21:56.062681  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:21:56.062693  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:21:56.098357  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:21:56.098385  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:21:56.098500  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:21:56.098512  209319 out.go:239]   Nov 07 17:21:52 kubernetes-upgrade-171701 kubelet[6276]: E1107 17:21:52.934800    6276 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:56.098518  209319 out.go:239]   Nov 07 17:21:53 kubernetes-upgrade-171701 kubelet[6286]: E1107 17:21:53.687401    6286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:56.098522  209319 out.go:239]   Nov 07 17:21:54 kubernetes-upgrade-171701 kubelet[6297]: E1107 17:21:54.431383    6297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:56.098527  209319 out.go:239]   Nov 07 17:21:55 kubernetes-upgrade-171701 kubelet[6308]: E1107 17:21:55.183511    6308 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:21:56.098533  209319 out.go:239]   Nov 07 17:21:55 kubernetes-upgrade-171701 kubelet[6397]: E1107 17:21:55.935934    6397 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:21:56.098540  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:21:56.098545  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:22:06.099583  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:22:06.244695  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:22:06.244765  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:22:06.269970  209319 cri.go:87] found id: ""
	I1107 17:22:06.269998  209319 logs.go:274] 0 containers: []
	W1107 17:22:06.270004  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:22:06.270016  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:22:06.270067  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:22:06.294234  209319 cri.go:87] found id: ""
	I1107 17:22:06.294266  209319 logs.go:274] 0 containers: []
	W1107 17:22:06.294274  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:22:06.294280  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:22:06.294338  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:22:06.317657  209319 cri.go:87] found id: ""
	I1107 17:22:06.317687  209319 logs.go:274] 0 containers: []
	W1107 17:22:06.317693  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:22:06.317700  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:22:06.317750  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:22:06.340157  209319 cri.go:87] found id: ""
	I1107 17:22:06.340186  209319 logs.go:274] 0 containers: []
	W1107 17:22:06.340195  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:22:06.340204  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:22:06.340255  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:22:06.362457  209319 cri.go:87] found id: ""
	I1107 17:22:06.362486  209319 logs.go:274] 0 containers: []
	W1107 17:22:06.362495  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:22:06.362503  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:22:06.362551  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:22:06.385684  209319 cri.go:87] found id: ""
	I1107 17:22:06.385708  209319 logs.go:274] 0 containers: []
	W1107 17:22:06.385715  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:22:06.385721  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:22:06.385781  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:22:06.412984  209319 cri.go:87] found id: ""
	I1107 17:22:06.413007  209319 logs.go:274] 0 containers: []
	W1107 17:22:06.413013  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:22:06.413020  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:22:06.413072  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:22:06.437931  209319 cri.go:87] found id: ""
	I1107 17:22:06.437961  209319 logs.go:274] 0 containers: []
	W1107 17:22:06.437971  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:22:06.437984  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:22:06.437999  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:22:06.474855  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:22:06.474901  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:22:06.501277  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:22:06.501310  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:22:06.517192  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:16 kubernetes-upgrade-171701 kubelet[5332]: E1107 17:21:16.939689    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.517555  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:17 kubernetes-upgrade-171701 kubelet[5342]: E1107 17:21:17.680886    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.517897  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:18 kubernetes-upgrade-171701 kubelet[5352]: E1107 17:21:18.436211    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.518247  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:19 kubernetes-upgrade-171701 kubelet[5363]: E1107 17:21:19.191559    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.518616  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:19 kubernetes-upgrade-171701 kubelet[5374]: E1107 17:21:19.950751    5374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.518970  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:20 kubernetes-upgrade-171701 kubelet[5385]: E1107 17:21:20.683847    5385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.519319  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:21 kubernetes-upgrade-171701 kubelet[5395]: E1107 17:21:21.438919    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.519663  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:22 kubernetes-upgrade-171701 kubelet[5405]: E1107 17:21:22.185038    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.520004  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:22 kubernetes-upgrade-171701 kubelet[5417]: E1107 17:21:22.933748    5417 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.520350  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:23 kubernetes-upgrade-171701 kubelet[5428]: E1107 17:21:23.685540    5428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.520700  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:24 kubernetes-upgrade-171701 kubelet[5518]: E1107 17:21:24.440172    5518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.521071  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:25 kubernetes-upgrade-171701 kubelet[5588]: E1107 17:21:25.181880    5588 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.521420  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:25 kubernetes-upgrade-171701 kubelet[5598]: E1107 17:21:25.932876    5598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.521768  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:26 kubernetes-upgrade-171701 kubelet[5610]: E1107 17:21:26.687004    5610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.522204  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:27 kubernetes-upgrade-171701 kubelet[5621]: E1107 17:21:27.433534    5621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.522614  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:28 kubernetes-upgrade-171701 kubelet[5632]: E1107 17:21:28.183568    5632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.522987  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:28 kubernetes-upgrade-171701 kubelet[5644]: E1107 17:21:28.935319    5644 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.523366  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:29 kubernetes-upgrade-171701 kubelet[5656]: E1107 17:21:29.685212    5656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.523734  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:30 kubernetes-upgrade-171701 kubelet[5668]: E1107 17:21:30.430371    5668 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.524113  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:31 kubernetes-upgrade-171701 kubelet[5679]: E1107 17:21:31.181477    5679 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.524489  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:31 kubernetes-upgrade-171701 kubelet[5691]: E1107 17:21:31.935381    5691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.524857  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:32 kubernetes-upgrade-171701 kubelet[5702]: E1107 17:21:32.687329    5702 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.525227  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:33 kubernetes-upgrade-171701 kubelet[5713]: E1107 17:21:33.437705    5713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.525618  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:34 kubernetes-upgrade-171701 kubelet[5723]: E1107 17:21:34.182195    5723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.526026  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:34 kubernetes-upgrade-171701 kubelet[5813]: E1107 17:21:34.936169    5813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.526432  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:35 kubernetes-upgrade-171701 kubelet[5881]: E1107 17:21:35.680212    5881 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.526881  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:36 kubernetes-upgrade-171701 kubelet[5892]: E1107 17:21:36.432454    5892 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.527293  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:37 kubernetes-upgrade-171701 kubelet[5903]: E1107 17:21:37.183320    5903 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.527690  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:37 kubernetes-upgrade-171701 kubelet[5914]: E1107 17:21:37.933921    5914 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.528097  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:38 kubernetes-upgrade-171701 kubelet[5925]: E1107 17:21:38.681885    5925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.528465  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:39 kubernetes-upgrade-171701 kubelet[5936]: E1107 17:21:39.432473    5936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.528921  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:40 kubernetes-upgrade-171701 kubelet[5948]: E1107 17:21:40.182154    5948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.529377  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:40 kubernetes-upgrade-171701 kubelet[5959]: E1107 17:21:40.932501    5959 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.529769  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:41 kubernetes-upgrade-171701 kubelet[5970]: E1107 17:21:41.681766    5970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.530145  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:42 kubernetes-upgrade-171701 kubelet[5981]: E1107 17:21:42.432942    5981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.530552  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:43 kubernetes-upgrade-171701 kubelet[5992]: E1107 17:21:43.182862    5992 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.530931  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:43 kubernetes-upgrade-171701 kubelet[6003]: E1107 17:21:43.931957    6003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.531309  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:44 kubernetes-upgrade-171701 kubelet[6014]: E1107 17:21:44.681972    6014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.531673  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:45 kubernetes-upgrade-171701 kubelet[6105]: E1107 17:21:45.440951    6105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.532055  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:46 kubernetes-upgrade-171701 kubelet[6175]: E1107 17:21:46.181839    6175 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.532422  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:46 kubernetes-upgrade-171701 kubelet[6186]: E1107 17:21:46.932298    6186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.532789  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:47 kubernetes-upgrade-171701 kubelet[6197]: E1107 17:21:47.680590    6197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.533159  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:48 kubernetes-upgrade-171701 kubelet[6209]: E1107 17:21:48.431211    6209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.533536  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:49 kubernetes-upgrade-171701 kubelet[6220]: E1107 17:21:49.180624    6220 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.533909  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:49 kubernetes-upgrade-171701 kubelet[6231]: E1107 17:21:49.932522    6231 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.534276  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:50 kubernetes-upgrade-171701 kubelet[6243]: E1107 17:21:50.680257    6243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.534663  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:51 kubernetes-upgrade-171701 kubelet[6254]: E1107 17:21:51.435278    6254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.535034  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:52 kubernetes-upgrade-171701 kubelet[6265]: E1107 17:21:52.186539    6265 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.535403  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:52 kubernetes-upgrade-171701 kubelet[6276]: E1107 17:21:52.934800    6276 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.535775  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:53 kubernetes-upgrade-171701 kubelet[6286]: E1107 17:21:53.687401    6286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.536156  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:54 kubernetes-upgrade-171701 kubelet[6297]: E1107 17:21:54.431383    6297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.536530  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:55 kubernetes-upgrade-171701 kubelet[6308]: E1107 17:21:55.183511    6308 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.536907  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:55 kubernetes-upgrade-171701 kubelet[6397]: E1107 17:21:55.935934    6397 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.537285  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:56 kubernetes-upgrade-171701 kubelet[6468]: E1107 17:21:56.681927    6468 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.537665  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:57 kubernetes-upgrade-171701 kubelet[6480]: E1107 17:21:57.432096    6480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.538046  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:58 kubernetes-upgrade-171701 kubelet[6491]: E1107 17:21:58.181940    6491 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.538444  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:58 kubernetes-upgrade-171701 kubelet[6502]: E1107 17:21:58.931585    6502 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.538855  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:59 kubernetes-upgrade-171701 kubelet[6514]: E1107 17:21:59.684165    6514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.539233  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:00 kubernetes-upgrade-171701 kubelet[6526]: E1107 17:22:00.432499    6526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.539608  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:01 kubernetes-upgrade-171701 kubelet[6537]: E1107 17:22:01.183357    6537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.539977  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:01 kubernetes-upgrade-171701 kubelet[6549]: E1107 17:22:01.932810    6549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.540386  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:02 kubernetes-upgrade-171701 kubelet[6560]: E1107 17:22:02.684840    6560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.540780  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:03 kubernetes-upgrade-171701 kubelet[6571]: E1107 17:22:03.439682    6571 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.541197  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:04 kubernetes-upgrade-171701 kubelet[6582]: E1107 17:22:04.182110    6582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.541569  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:04 kubernetes-upgrade-171701 kubelet[6593]: E1107 17:22:04.933136    6593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.541945  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:05 kubernetes-upgrade-171701 kubelet[6604]: E1107 17:22:05.681006    6604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.542353  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:06 kubernetes-upgrade-171701 kubelet[6691]: E1107 17:22:06.440696    6691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:22:06.542507  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:22:06.542526  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:22:06.558691  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:22:06.558720  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:22:06.613827  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:22:06.613862  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:22:06.613891  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:22:06.614061  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:22:06.614082  209319 out.go:239]   Nov 07 17:22:03 kubernetes-upgrade-171701 kubelet[6571]: E1107 17:22:03.439682    6571 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.614089  209319 out.go:239]   Nov 07 17:22:04 kubernetes-upgrade-171701 kubelet[6582]: E1107 17:22:04.182110    6582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.614094  209319 out.go:239]   Nov 07 17:22:04 kubernetes-upgrade-171701 kubelet[6593]: E1107 17:22:04.933136    6593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.614101  209319 out.go:239]   Nov 07 17:22:05 kubernetes-upgrade-171701 kubelet[6604]: E1107 17:22:05.681006    6604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:06.614111  209319 out.go:239]   Nov 07 17:22:06 kubernetes-upgrade-171701 kubelet[6691]: E1107 17:22:06.440696    6691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:22:06.614125  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:22:06.614140  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:22:16.615846  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:22:16.744434  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:22:16.744505  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:22:16.776281  209319 cri.go:87] found id: ""
	I1107 17:22:16.776310  209319 logs.go:274] 0 containers: []
	W1107 17:22:16.776318  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:22:16.776327  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:22:16.776380  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:22:16.811254  209319 cri.go:87] found id: ""
	I1107 17:22:16.811285  209319 logs.go:274] 0 containers: []
	W1107 17:22:16.811390  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:22:16.811410  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:22:16.811470  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:22:16.839123  209319 cri.go:87] found id: ""
	I1107 17:22:16.839158  209319 logs.go:274] 0 containers: []
	W1107 17:22:16.839165  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:22:16.839171  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:22:16.839211  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:22:16.867665  209319 cri.go:87] found id: ""
	I1107 17:22:16.867697  209319 logs.go:274] 0 containers: []
	W1107 17:22:16.867707  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:22:16.867716  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:22:16.867763  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:22:16.903322  209319 cri.go:87] found id: ""
	I1107 17:22:16.903353  209319 logs.go:274] 0 containers: []
	W1107 17:22:16.903361  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:22:16.903369  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:22:16.903423  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:22:16.937358  209319 cri.go:87] found id: ""
	I1107 17:22:16.937383  209319 logs.go:274] 0 containers: []
	W1107 17:22:16.937393  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:22:16.937402  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:22:16.937456  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:22:16.966948  209319 cri.go:87] found id: ""
	I1107 17:22:16.966978  209319 logs.go:274] 0 containers: []
	W1107 17:22:16.966986  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:22:16.966995  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:22:16.967047  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:22:16.993864  209319 cri.go:87] found id: ""
	I1107 17:22:16.993893  209319 logs.go:274] 0 containers: []
	W1107 17:22:16.993901  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:22:16.993914  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:22:16.993928  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:22:17.013732  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:27 kubernetes-upgrade-171701 kubelet[5621]: E1107 17:21:27.433534    5621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.014280  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:28 kubernetes-upgrade-171701 kubelet[5632]: E1107 17:21:28.183568    5632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.014744  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:28 kubernetes-upgrade-171701 kubelet[5644]: E1107 17:21:28.935319    5644 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.015144  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:29 kubernetes-upgrade-171701 kubelet[5656]: E1107 17:21:29.685212    5656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.015530  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:30 kubernetes-upgrade-171701 kubelet[5668]: E1107 17:21:30.430371    5668 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.015878  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:31 kubernetes-upgrade-171701 kubelet[5679]: E1107 17:21:31.181477    5679 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.016230  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:31 kubernetes-upgrade-171701 kubelet[5691]: E1107 17:21:31.935381    5691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.016592  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:32 kubernetes-upgrade-171701 kubelet[5702]: E1107 17:21:32.687329    5702 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.016936  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:33 kubernetes-upgrade-171701 kubelet[5713]: E1107 17:21:33.437705    5713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.017296  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:34 kubernetes-upgrade-171701 kubelet[5723]: E1107 17:21:34.182195    5723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.017667  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:34 kubernetes-upgrade-171701 kubelet[5813]: E1107 17:21:34.936169    5813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.018075  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:35 kubernetes-upgrade-171701 kubelet[5881]: E1107 17:21:35.680212    5881 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.018518  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:36 kubernetes-upgrade-171701 kubelet[5892]: E1107 17:21:36.432454    5892 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.018875  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:37 kubernetes-upgrade-171701 kubelet[5903]: E1107 17:21:37.183320    5903 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.019225  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:37 kubernetes-upgrade-171701 kubelet[5914]: E1107 17:21:37.933921    5914 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.019567  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:38 kubernetes-upgrade-171701 kubelet[5925]: E1107 17:21:38.681885    5925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.019924  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:39 kubernetes-upgrade-171701 kubelet[5936]: E1107 17:21:39.432473    5936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.020287  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:40 kubernetes-upgrade-171701 kubelet[5948]: E1107 17:21:40.182154    5948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.020638  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:40 kubernetes-upgrade-171701 kubelet[5959]: E1107 17:21:40.932501    5959 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.020981  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:41 kubernetes-upgrade-171701 kubelet[5970]: E1107 17:21:41.681766    5970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.021349  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:42 kubernetes-upgrade-171701 kubelet[5981]: E1107 17:21:42.432942    5981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.021718  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:43 kubernetes-upgrade-171701 kubelet[5992]: E1107 17:21:43.182862    5992 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.022072  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:43 kubernetes-upgrade-171701 kubelet[6003]: E1107 17:21:43.931957    6003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.022482  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:44 kubernetes-upgrade-171701 kubelet[6014]: E1107 17:21:44.681972    6014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.022889  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:45 kubernetes-upgrade-171701 kubelet[6105]: E1107 17:21:45.440951    6105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.023239  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:46 kubernetes-upgrade-171701 kubelet[6175]: E1107 17:21:46.181839    6175 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.023589  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:46 kubernetes-upgrade-171701 kubelet[6186]: E1107 17:21:46.932298    6186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.023931  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:47 kubernetes-upgrade-171701 kubelet[6197]: E1107 17:21:47.680590    6197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.024284  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:48 kubernetes-upgrade-171701 kubelet[6209]: E1107 17:21:48.431211    6209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.024623  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:49 kubernetes-upgrade-171701 kubelet[6220]: E1107 17:21:49.180624    6220 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.024967  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:49 kubernetes-upgrade-171701 kubelet[6231]: E1107 17:21:49.932522    6231 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.025340  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:50 kubernetes-upgrade-171701 kubelet[6243]: E1107 17:21:50.680257    6243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.025689  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:51 kubernetes-upgrade-171701 kubelet[6254]: E1107 17:21:51.435278    6254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.026038  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:52 kubernetes-upgrade-171701 kubelet[6265]: E1107 17:21:52.186539    6265 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.026450  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:52 kubernetes-upgrade-171701 kubelet[6276]: E1107 17:21:52.934800    6276 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.026815  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:53 kubernetes-upgrade-171701 kubelet[6286]: E1107 17:21:53.687401    6286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.027202  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:54 kubernetes-upgrade-171701 kubelet[6297]: E1107 17:21:54.431383    6297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.027569  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:55 kubernetes-upgrade-171701 kubelet[6308]: E1107 17:21:55.183511    6308 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.027922  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:55 kubernetes-upgrade-171701 kubelet[6397]: E1107 17:21:55.935934    6397 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.028283  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:56 kubernetes-upgrade-171701 kubelet[6468]: E1107 17:21:56.681927    6468 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.028632  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:57 kubernetes-upgrade-171701 kubelet[6480]: E1107 17:21:57.432096    6480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.028980  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:58 kubernetes-upgrade-171701 kubelet[6491]: E1107 17:21:58.181940    6491 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.029338  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:58 kubernetes-upgrade-171701 kubelet[6502]: E1107 17:21:58.931585    6502 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.029687  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:59 kubernetes-upgrade-171701 kubelet[6514]: E1107 17:21:59.684165    6514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.030043  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:00 kubernetes-upgrade-171701 kubelet[6526]: E1107 17:22:00.432499    6526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.030441  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:01 kubernetes-upgrade-171701 kubelet[6537]: E1107 17:22:01.183357    6537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.030795  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:01 kubernetes-upgrade-171701 kubelet[6549]: E1107 17:22:01.932810    6549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.031141  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:02 kubernetes-upgrade-171701 kubelet[6560]: E1107 17:22:02.684840    6560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.031496  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:03 kubernetes-upgrade-171701 kubelet[6571]: E1107 17:22:03.439682    6571 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.031847  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:04 kubernetes-upgrade-171701 kubelet[6582]: E1107 17:22:04.182110    6582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.032195  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:04 kubernetes-upgrade-171701 kubelet[6593]: E1107 17:22:04.933136    6593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.032581  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:05 kubernetes-upgrade-171701 kubelet[6604]: E1107 17:22:05.681006    6604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.032929  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:06 kubernetes-upgrade-171701 kubelet[6691]: E1107 17:22:06.440696    6691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.033282  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:07 kubernetes-upgrade-171701 kubelet[6762]: E1107 17:22:07.183012    6762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.033653  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:07 kubernetes-upgrade-171701 kubelet[6773]: E1107 17:22:07.935962    6773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.034091  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:08 kubernetes-upgrade-171701 kubelet[6784]: E1107 17:22:08.683999    6784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.034552  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:09 kubernetes-upgrade-171701 kubelet[6796]: E1107 17:22:09.438230    6796 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.034929  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:10 kubernetes-upgrade-171701 kubelet[6807]: E1107 17:22:10.181290    6807 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.035308  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:10 kubernetes-upgrade-171701 kubelet[6819]: E1107 17:22:10.935547    6819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.035681  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:11 kubernetes-upgrade-171701 kubelet[6831]: E1107 17:22:11.682646    6831 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.036058  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:12 kubernetes-upgrade-171701 kubelet[6842]: E1107 17:22:12.434082    6842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.036425  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:13 kubernetes-upgrade-171701 kubelet[6853]: E1107 17:22:13.182860    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.036797  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:13 kubernetes-upgrade-171701 kubelet[6864]: E1107 17:22:13.934438    6864 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.037168  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:14 kubernetes-upgrade-171701 kubelet[6875]: E1107 17:22:14.683110    6875 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.037535  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:15 kubernetes-upgrade-171701 kubelet[6886]: E1107 17:22:15.449668    6886 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.037910  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:16 kubernetes-upgrade-171701 kubelet[6896]: E1107 17:22:16.196685    6896 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.038294  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:16 kubernetes-upgrade-171701 kubelet[6966]: E1107 17:22:16.953919    6966 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:22:17.038452  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:22:17.038471  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:22:17.056292  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:22:17.056316  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:22:17.121155  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:22:17.121183  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:22:17.121195  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:22:17.159665  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:22:17.159699  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:22:17.191069  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:22:17.191101  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:22:17.191243  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:22:17.191256  209319 out.go:239]   Nov 07 17:22:13 kubernetes-upgrade-171701 kubelet[6864]: E1107 17:22:13.934438    6864 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.191264  209319 out.go:239]   Nov 07 17:22:14 kubernetes-upgrade-171701 kubelet[6875]: E1107 17:22:14.683110    6875 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.191270  209319 out.go:239]   Nov 07 17:22:15 kubernetes-upgrade-171701 kubelet[6886]: E1107 17:22:15.449668    6886 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.191277  209319 out.go:239]   Nov 07 17:22:16 kubernetes-upgrade-171701 kubelet[6896]: E1107 17:22:16.196685    6896 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:17.191283  209319 out.go:239]   Nov 07 17:22:16 kubernetes-upgrade-171701 kubelet[6966]: E1107 17:22:16.953919    6966 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:22:17.191290  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:22:17.191299  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:22:27.191769  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:22:27.244197  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:22:27.244287  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:22:27.272620  209319 cri.go:87] found id: ""
	I1107 17:22:27.272651  209319 logs.go:274] 0 containers: []
	W1107 17:22:27.272658  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:22:27.272667  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:22:27.272720  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:22:27.297854  209319 cri.go:87] found id: ""
	I1107 17:22:27.297885  209319 logs.go:274] 0 containers: []
	W1107 17:22:27.297894  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:22:27.297902  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:22:27.297960  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:22:27.324779  209319 cri.go:87] found id: ""
	I1107 17:22:27.324806  209319 logs.go:274] 0 containers: []
	W1107 17:22:27.324815  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:22:27.324824  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:22:27.324882  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:22:27.352287  209319 cri.go:87] found id: ""
	I1107 17:22:27.352319  209319 logs.go:274] 0 containers: []
	W1107 17:22:27.352328  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:22:27.352339  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:22:27.352394  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:22:27.378878  209319 cri.go:87] found id: ""
	I1107 17:22:27.378908  209319 logs.go:274] 0 containers: []
	W1107 17:22:27.378916  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:22:27.378925  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:22:27.378984  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:22:27.405386  209319 cri.go:87] found id: ""
	I1107 17:22:27.405415  209319 logs.go:274] 0 containers: []
	W1107 17:22:27.405424  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:22:27.405434  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:22:27.405488  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:22:27.434283  209319 cri.go:87] found id: ""
	I1107 17:22:27.434358  209319 logs.go:274] 0 containers: []
	W1107 17:22:27.434368  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:22:27.434377  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:22:27.434434  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:22:27.468546  209319 cri.go:87] found id: ""
	I1107 17:22:27.468576  209319 logs.go:274] 0 containers: []
	W1107 17:22:27.468586  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:22:27.468598  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:22:27.468616  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:22:27.489072  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:22:27.489110  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:22:27.561955  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:22:27.561989  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:22:27.562004  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:22:27.609043  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:22:27.609089  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:22:27.641768  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:22:27.641808  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:22:27.662881  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:37 kubernetes-upgrade-171701 kubelet[5914]: E1107 17:21:37.933921    5914 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.663485  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:38 kubernetes-upgrade-171701 kubelet[5925]: E1107 17:21:38.681885    5925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.663939  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:39 kubernetes-upgrade-171701 kubelet[5936]: E1107 17:21:39.432473    5936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.664429  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:40 kubernetes-upgrade-171701 kubelet[5948]: E1107 17:21:40.182154    5948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.664951  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:40 kubernetes-upgrade-171701 kubelet[5959]: E1107 17:21:40.932501    5959 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.665380  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:41 kubernetes-upgrade-171701 kubelet[5970]: E1107 17:21:41.681766    5970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.665785  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:42 kubernetes-upgrade-171701 kubelet[5981]: E1107 17:21:42.432942    5981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.666137  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:43 kubernetes-upgrade-171701 kubelet[5992]: E1107 17:21:43.182862    5992 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.666543  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:43 kubernetes-upgrade-171701 kubelet[6003]: E1107 17:21:43.931957    6003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.666889  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:44 kubernetes-upgrade-171701 kubelet[6014]: E1107 17:21:44.681972    6014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.667287  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:45 kubernetes-upgrade-171701 kubelet[6105]: E1107 17:21:45.440951    6105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.667652  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:46 kubernetes-upgrade-171701 kubelet[6175]: E1107 17:21:46.181839    6175 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.668003  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:46 kubernetes-upgrade-171701 kubelet[6186]: E1107 17:21:46.932298    6186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.668401  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:47 kubernetes-upgrade-171701 kubelet[6197]: E1107 17:21:47.680590    6197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.668819  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:48 kubernetes-upgrade-171701 kubelet[6209]: E1107 17:21:48.431211    6209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.669176  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:49 kubernetes-upgrade-171701 kubelet[6220]: E1107 17:21:49.180624    6220 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.669550  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:49 kubernetes-upgrade-171701 kubelet[6231]: E1107 17:21:49.932522    6231 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.669947  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:50 kubernetes-upgrade-171701 kubelet[6243]: E1107 17:21:50.680257    6243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.670598  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:51 kubernetes-upgrade-171701 kubelet[6254]: E1107 17:21:51.435278    6254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.671207  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:52 kubernetes-upgrade-171701 kubelet[6265]: E1107 17:21:52.186539    6265 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.671820  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:52 kubernetes-upgrade-171701 kubelet[6276]: E1107 17:21:52.934800    6276 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.672276  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:53 kubernetes-upgrade-171701 kubelet[6286]: E1107 17:21:53.687401    6286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.672750  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:54 kubernetes-upgrade-171701 kubelet[6297]: E1107 17:21:54.431383    6297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.673175  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:55 kubernetes-upgrade-171701 kubelet[6308]: E1107 17:21:55.183511    6308 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.673568  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:55 kubernetes-upgrade-171701 kubelet[6397]: E1107 17:21:55.935934    6397 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.674001  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:56 kubernetes-upgrade-171701 kubelet[6468]: E1107 17:21:56.681927    6468 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.674476  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:57 kubernetes-upgrade-171701 kubelet[6480]: E1107 17:21:57.432096    6480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.674893  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:58 kubernetes-upgrade-171701 kubelet[6491]: E1107 17:21:58.181940    6491 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.675288  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:58 kubernetes-upgrade-171701 kubelet[6502]: E1107 17:21:58.931585    6502 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.675702  209319 logs.go:138] Found kubelet problem: Nov 07 17:21:59 kubernetes-upgrade-171701 kubelet[6514]: E1107 17:21:59.684165    6514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.676079  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:00 kubernetes-upgrade-171701 kubelet[6526]: E1107 17:22:00.432499    6526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.676474  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:01 kubernetes-upgrade-171701 kubelet[6537]: E1107 17:22:01.183357    6537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.676995  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:01 kubernetes-upgrade-171701 kubelet[6549]: E1107 17:22:01.932810    6549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.677579  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:02 kubernetes-upgrade-171701 kubelet[6560]: E1107 17:22:02.684840    6560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.678050  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:03 kubernetes-upgrade-171701 kubelet[6571]: E1107 17:22:03.439682    6571 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.678508  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:04 kubernetes-upgrade-171701 kubelet[6582]: E1107 17:22:04.182110    6582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.678940  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:04 kubernetes-upgrade-171701 kubelet[6593]: E1107 17:22:04.933136    6593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.679448  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:05 kubernetes-upgrade-171701 kubelet[6604]: E1107 17:22:05.681006    6604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.680055  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:06 kubernetes-upgrade-171701 kubelet[6691]: E1107 17:22:06.440696    6691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.680634  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:07 kubernetes-upgrade-171701 kubelet[6762]: E1107 17:22:07.183012    6762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.681210  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:07 kubernetes-upgrade-171701 kubelet[6773]: E1107 17:22:07.935962    6773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.681794  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:08 kubernetes-upgrade-171701 kubelet[6784]: E1107 17:22:08.683999    6784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.682950  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:09 kubernetes-upgrade-171701 kubelet[6796]: E1107 17:22:09.438230    6796 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.683456  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:10 kubernetes-upgrade-171701 kubelet[6807]: E1107 17:22:10.181290    6807 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.684064  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:10 kubernetes-upgrade-171701 kubelet[6819]: E1107 17:22:10.935547    6819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.684668  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:11 kubernetes-upgrade-171701 kubelet[6831]: E1107 17:22:11.682646    6831 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.685281  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:12 kubernetes-upgrade-171701 kubelet[6842]: E1107 17:22:12.434082    6842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.685892  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:13 kubernetes-upgrade-171701 kubelet[6853]: E1107 17:22:13.182860    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.686487  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:13 kubernetes-upgrade-171701 kubelet[6864]: E1107 17:22:13.934438    6864 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.686888  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:14 kubernetes-upgrade-171701 kubelet[6875]: E1107 17:22:14.683110    6875 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.687266  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:15 kubernetes-upgrade-171701 kubelet[6886]: E1107 17:22:15.449668    6886 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.687624  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:16 kubernetes-upgrade-171701 kubelet[6896]: E1107 17:22:16.196685    6896 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.688134  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:16 kubernetes-upgrade-171701 kubelet[6966]: E1107 17:22:16.953919    6966 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.688530  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:17 kubernetes-upgrade-171701 kubelet[7052]: E1107 17:22:17.702283    7052 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.688879  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:18 kubernetes-upgrade-171701 kubelet[7063]: E1107 17:22:18.468392    7063 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.689330  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:19 kubernetes-upgrade-171701 kubelet[7074]: E1107 17:22:19.189105    7074 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.689777  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:19 kubernetes-upgrade-171701 kubelet[7085]: E1107 17:22:19.958349    7085 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.690148  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:20 kubernetes-upgrade-171701 kubelet[7095]: E1107 17:22:20.711217    7095 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.690564  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:21 kubernetes-upgrade-171701 kubelet[7105]: E1107 17:22:21.440609    7105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.691106  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:22 kubernetes-upgrade-171701 kubelet[7117]: E1107 17:22:22.185255    7117 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.691562  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:22 kubernetes-upgrade-171701 kubelet[7127]: E1107 17:22:22.937171    7127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.692009  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:23 kubernetes-upgrade-171701 kubelet[7139]: E1107 17:22:23.683494    7139 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.692571  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:24 kubernetes-upgrade-171701 kubelet[7150]: E1107 17:22:24.431907    7150 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.693130  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:25 kubernetes-upgrade-171701 kubelet[7161]: E1107 17:22:25.181660    7161 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.693730  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:25 kubernetes-upgrade-171701 kubelet[7172]: E1107 17:22:25.936408    7172 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.694370  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:26 kubernetes-upgrade-171701 kubelet[7183]: E1107 17:22:26.686957    7183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.694850  209319 logs.go:138] Found kubelet problem: Nov 07 17:22:27 kubernetes-upgrade-171701 kubelet[7257]: E1107 17:22:27.461364    7257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:22:27.695013  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:22:27.695031  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 17:22:27.695170  209319 out.go:239] X Problems detected in kubelet:
	W1107 17:22:27.695189  209319 out.go:239]   Nov 07 17:22:24 kubernetes-upgrade-171701 kubelet[7150]: E1107 17:22:24.431907    7150 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.695196  209319 out.go:239]   Nov 07 17:22:25 kubernetes-upgrade-171701 kubelet[7161]: E1107 17:22:25.181660    7161 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.695201  209319 out.go:239]   Nov 07 17:22:25 kubernetes-upgrade-171701 kubelet[7172]: E1107 17:22:25.936408    7172 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.695210  209319 out.go:239]   Nov 07 17:22:26 kubernetes-upgrade-171701 kubelet[7183]: E1107 17:22:26.686957    7183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:22:27.695223  209319 out.go:239]   Nov 07 17:22:27 kubernetes-upgrade-171701 kubelet[7257]: E1107 17:22:27.461364    7257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:22:27.695231  209319 out.go:309] Setting ErrFile to fd 2...
	I1107 17:22:27.695244  209319 out.go:343] TERM=,COLORTERM=, which probably does not support color
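[Editor's note: the repeated kubelet crashes summarized above have a single cause: the old cluster's `/var/lib/kubelet/kubeadm-flags.env` still passes `--cni-conf-dir`, a flag the newer kubelet no longer accepts, so every restart dies at flag parsing. A minimal sketch of the kind of cleanup that unblocks the kubelet — it operates on a local sample file, and the flag values shown are illustrative, not taken from this node:]

```shell
# Sketch: strip kubelet flags that newer kubelets no longer accept.
# On a real node the file would be /var/lib/kubelet/kubeadm-flags.env
# (edited with sudo, followed by a kubelet restart); here we use a
# temp file with made-up contents so the sketch is self-contained.
flags_env=$(mktemp)
cat > "$flags_env" <<'EOF'
KUBELET_KUBEADM_ARGS="--container-runtime=remote --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --node-ip=192.168.49.2"
EOF

# Remove the dropped flags together with their values (and the
# trailing space, so the remaining args stay well-formed).
sed -i -E 's/--cni-(conf|bin)-dir=[^ "]*( )?//g' "$flags_env"

cat "$flags_env"
rm -f "$flags_env"
```

[CNI directories are configured in the container runtime (e.g. containerd's CRI plugin) on these kubelet versions, which is why simply deleting the flags is the usual migration.]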
	I1107 17:22:37.696385  209319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:22:37.705224  209319 kubeadm.go:631] restartCluster took 4m11.029867389s
	W1107 17:22:37.705382  209319 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I1107 17:22:37.705412  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1107 17:22:39.577384  209319 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.871945s)
	I1107 17:22:39.577454  209319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:22:39.589437  209319 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 17:22:39.596376  209319 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 17:22:39.596425  209319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:22:39.603248  209319 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 17:22:39.603300  209319 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 17:22:39.641769  209319 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1107 17:22:39.641848  209319 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 17:22:39.670513  209319 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1107 17:22:39.670591  209319 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1107 17:22:39.670634  209319 kubeadm.go:317] OS: Linux
	I1107 17:22:39.670706  209319 kubeadm.go:317] CGROUPS_CPU: enabled
	I1107 17:22:39.670791  209319 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1107 17:22:39.670871  209319 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1107 17:22:39.670940  209319 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1107 17:22:39.671007  209319 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1107 17:22:39.671097  209319 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1107 17:22:39.671165  209319 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1107 17:22:39.671229  209319 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1107 17:22:39.671299  209319 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1107 17:22:39.733137  209319 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 17:22:39.733278  209319 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 17:22:39.733393  209319 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 17:22:39.856691  209319 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 17:22:39.859105  209319 out.go:204]   - Generating certificates and keys ...
	I1107 17:22:39.859260  209319 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 17:22:39.859344  209319 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 17:22:39.859485  209319 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1107 17:22:39.859600  209319 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1107 17:22:39.859695  209319 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1107 17:22:39.859768  209319 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1107 17:22:39.859849  209319 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1107 17:22:39.859929  209319 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1107 17:22:39.860025  209319 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1107 17:22:39.860118  209319 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1107 17:22:39.860169  209319 kubeadm.go:317] [certs] Using the existing "sa" key
	I1107 17:22:39.860241  209319 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 17:22:40.001768  209319 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 17:22:40.057377  209319 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 17:22:40.485260  209319 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 17:22:40.956902  209319 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 17:22:40.968930  209319 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 17:22:40.969991  209319 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 17:22:40.970084  209319 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1107 17:22:41.068757  209319 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 17:22:41.071415  209319 out.go:204]   - Booting up control plane ...
	I1107 17:22:41.071572  209319 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 17:22:41.072665  209319 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 17:22:41.073687  209319 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 17:22:41.074420  209319 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 17:22:41.077330  209319 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 17:23:21.077451  209319 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1107 17:23:21.078014  209319 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 17:23:21.078277  209319 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 17:23:26.078661  209319 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 17:23:26.078879  209319 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 17:23:36.079120  209319 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 17:23:36.079351  209319 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 17:23:56.079558  209319 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 17:23:56.079737  209319 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 17:24:36.080499  209319 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 17:24:36.080699  209319 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 17:24:36.080709  209319 kubeadm.go:317] 
	I1107 17:24:36.080740  209319 kubeadm.go:317] Unfortunately, an error has occurred:
	I1107 17:24:36.080823  209319 kubeadm.go:317] 	timed out waiting for the condition
	I1107 17:24:36.080847  209319 kubeadm.go:317] 
	I1107 17:24:36.080876  209319 kubeadm.go:317] This error is likely caused by:
	I1107 17:24:36.080904  209319 kubeadm.go:317] 	- The kubelet is not running
	I1107 17:24:36.080992  209319 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 17:24:36.081001  209319 kubeadm.go:317] 
	I1107 17:24:36.081117  209319 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 17:24:36.081147  209319 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1107 17:24:36.081175  209319 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1107 17:24:36.081182  209319 kubeadm.go:317] 
	I1107 17:24:36.081293  209319 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 17:24:36.081420  209319 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1107 17:24:36.081509  209319 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1107 17:24:36.081628  209319 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I1107 17:24:36.081698  209319 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1107 17:24:36.081770  209319 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I1107 17:24:36.083474  209319 kubeadm.go:317] W1107 17:22:39.636951    8505 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1107 17:24:36.083692  209319 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1107 17:24:36.083812  209319 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 17:24:36.083912  209319 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 17:24:36.084003  209319 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
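[Editor's note: kubeadm's hint above pipes `crictl ps -a` through two greps to isolate control-plane containers. The filter itself can be sanity-checked without a live runtime by feeding it fake `crictl`-style output — the container names and IDs below are made up for illustration:]

```shell
# Sketch: the grep filter from the kubeadm troubleshooting hint,
# run against fabricated `crictl ps -a` output. It keeps kube-*
# control-plane containers and drops pause (sandbox) containers.
sample='CONTAINER  IMAGE  STATE    NAME
aaa111     img1   Exited   kube-apiserver
bbb222     img2   Running  pause
ccc333     img3   Exited   kube-scheduler
ddd444     img4   Running  nginx'

printf '%s\n' "$sample" | grep kube | grep -v pause
```

[On the failing node this filter would surface any crashed kube-apiserver/etcd containers, whose IDs then feed `crictl logs CONTAINERID` as the hint describes.]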
	W1107 17:24:36.084316  209319 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1107 17:22:39.636951    8505 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1107 17:24:36.084364  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1107 17:24:37.880981  209319 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.79658839s)
	I1107 17:24:37.881050  209319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:24:37.890915  209319 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 17:24:37.890964  209319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:24:37.897952  209319 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 17:24:37.897995  209319 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 17:24:37.937523  209319 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1107 17:24:37.937602  209319 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 17:24:37.965025  209319 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1107 17:24:37.965105  209319 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1107 17:24:37.965165  209319 kubeadm.go:317] OS: Linux
	I1107 17:24:37.965234  209319 kubeadm.go:317] CGROUPS_CPU: enabled
	I1107 17:24:37.965291  209319 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1107 17:24:37.965342  209319 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1107 17:24:37.965384  209319 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1107 17:24:37.965422  209319 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1107 17:24:37.965475  209319 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1107 17:24:37.965537  209319 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1107 17:24:37.965607  209319 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1107 17:24:37.965672  209319 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1107 17:24:38.031014  209319 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 17:24:38.031186  209319 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 17:24:38.031277  209319 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 17:24:38.143728  209319 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 17:24:38.147780  209319 out.go:204]   - Generating certificates and keys ...
	I1107 17:24:38.147915  209319 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 17:24:38.148013  209319 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 17:24:38.148126  209319 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1107 17:24:38.148224  209319 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1107 17:24:38.148319  209319 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1107 17:24:38.148392  209319 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1107 17:24:38.148495  209319 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1107 17:24:38.148574  209319 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1107 17:24:38.148670  209319 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1107 17:24:38.148740  209319 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1107 17:24:38.148775  209319 kubeadm.go:317] [certs] Using the existing "sa" key
	I1107 17:24:38.148832  209319 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 17:24:38.392034  209319 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 17:24:38.500769  209319 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 17:24:38.621097  209319 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 17:24:38.891414  209319 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 17:24:38.903331  209319 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 17:24:38.904163  209319 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 17:24:38.904271  209319 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1107 17:24:38.983153  209319 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 17:24:38.985953  209319 out.go:204]   - Booting up control plane ...
	I1107 17:24:38.986115  209319 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 17:24:38.986360  209319 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 17:24:38.988678  209319 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 17:24:38.989424  209319 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 17:24:38.991574  209319 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 17:25:18.992314  209319 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1107 17:25:18.992733  209319 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 17:25:18.992921  209319 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 17:25:23.994016  209319 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 17:25:23.994240  209319 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 17:25:33.994796  209319 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 17:25:33.995085  209319 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 17:25:53.996092  209319 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 17:25:53.996302  209319 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 17:26:33.997479  209319 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 17:26:33.997681  209319 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 17:26:33.997691  209319 kubeadm.go:317] 
	I1107 17:26:33.997790  209319 kubeadm.go:317] Unfortunately, an error has occurred:
	I1107 17:26:33.997896  209319 kubeadm.go:317] 	timed out waiting for the condition
	I1107 17:26:33.997908  209319 kubeadm.go:317] 
	I1107 17:26:33.997946  209319 kubeadm.go:317] This error is likely caused by:
	I1107 17:26:33.997994  209319 kubeadm.go:317] 	- The kubelet is not running
	I1107 17:26:33.998156  209319 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 17:26:33.998181  209319 kubeadm.go:317] 
	I1107 17:26:33.998286  209319 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 17:26:33.998376  209319 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1107 17:26:33.998417  209319 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1107 17:26:33.998434  209319 kubeadm.go:317] 
	I1107 17:26:33.998676  209319 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 17:26:33.998805  209319 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1107 17:26:33.998945  209319 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1107 17:26:33.999084  209319 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I1107 17:26:33.999187  209319 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1107 17:26:33.999289  209319 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I1107 17:26:34.000479  209319 kubeadm.go:317] W1107 17:24:37.932315   11363 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1107 17:26:34.000728  209319 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1107 17:26:34.000855  209319 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 17:26:34.000949  209319 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 17:26:34.001034  209319 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
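The kubelet-check entries above record kubeadm's probe schedule: after the initial 40s timeout, the health probes are spaced at doubling intervals (5s, 10s, 20s, 40s) until the 4m0s wait-control-plane deadline. A minimal sketch that recomputes those intervals from the probe timestamps in this log (timestamp list copied from the `[kubelet-check]` entries above):

```shell
# Probe timestamps copied from the kubelet-check log entries above.
probes="17:25:18 17:25:23 17:25:33 17:25:53 17:26:33"
prev=""
for t in $probes; do
  # Convert HH:MM:SS to seconds since midnight.
  s=$(echo "$t" | awk -F: '{print $1*3600 + $2*60 + $3}')
  [ -n "$prev" ] && echo "+$((s - prev))s"
  prev=$s
done
# Prints +5s, +10s, +20s, +40s (one per line): each probe interval doubles.
```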
	I1107 17:26:34.001148  209319 kubeadm.go:398] StartCluster complete in 8m7.355554636s
	I1107 17:26:34.001194  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:26:34.001254  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:26:34.025316  209319 cri.go:87] found id: ""
	I1107 17:26:34.025341  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.025349  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:26:34.025357  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:26:34.025418  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:26:34.048389  209319 cri.go:87] found id: ""
	I1107 17:26:34.048431  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.048456  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:26:34.048464  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:26:34.048511  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:26:34.071769  209319 cri.go:87] found id: ""
	I1107 17:26:34.071794  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.071800  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:26:34.071806  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:26:34.071852  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:26:34.095098  209319 cri.go:87] found id: ""
	I1107 17:26:34.095129  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.095136  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:26:34.095143  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:26:34.095187  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:26:34.119878  209319 cri.go:87] found id: ""
	I1107 17:26:34.119904  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.119910  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:26:34.119917  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:26:34.119977  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:26:34.144038  209319 cri.go:87] found id: ""
	I1107 17:26:34.144070  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.144077  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:26:34.144084  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:26:34.144137  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:26:34.168381  209319 cri.go:87] found id: ""
	I1107 17:26:34.168409  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.168418  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:26:34.168426  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:26:34.168482  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:26:34.193472  209319 cri.go:87] found id: ""
	I1107 17:26:34.193499  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.193505  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:26:34.193518  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:26:34.193532  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:26:34.252476  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:26:34.252515  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:26:34.252530  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:26:34.307273  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:26:34.307322  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:26:34.334877  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:26:34.334910  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:26:34.351545  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:44 kubernetes-upgrade-171701 kubelet[12470]: E1107 17:25:44.431940   12470 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.351917  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:45 kubernetes-upgrade-171701 kubelet[12480]: E1107 17:25:45.192163   12480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.352274  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:45 kubernetes-upgrade-171701 kubelet[12492]: E1107 17:25:45.940674   12492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.352626  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:46 kubernetes-upgrade-171701 kubelet[12502]: E1107 17:25:46.682041   12502 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.353097  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:47 kubernetes-upgrade-171701 kubelet[12513]: E1107 17:25:47.430072   12513 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.353609  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:48 kubernetes-upgrade-171701 kubelet[12523]: E1107 17:25:48.182379   12523 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.353963  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:48 kubernetes-upgrade-171701 kubelet[12535]: E1107 17:25:48.932626   12535 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.354370  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:49 kubernetes-upgrade-171701 kubelet[12546]: E1107 17:25:49.684205   12546 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.354744  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:50 kubernetes-upgrade-171701 kubelet[12557]: E1107 17:25:50.433949   12557 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.355096  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:51 kubernetes-upgrade-171701 kubelet[12568]: E1107 17:25:51.183452   12568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.355444  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:51 kubernetes-upgrade-171701 kubelet[12579]: E1107 17:25:51.932730   12579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.355787  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:52 kubernetes-upgrade-171701 kubelet[12590]: E1107 17:25:52.683014   12590 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.356135  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:53 kubernetes-upgrade-171701 kubelet[12601]: E1107 17:25:53.431630   12601 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.356476  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:54 kubernetes-upgrade-171701 kubelet[12611]: E1107 17:25:54.183300   12611 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.356824  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:54 kubernetes-upgrade-171701 kubelet[12623]: E1107 17:25:54.938942   12623 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.357167  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:55 kubernetes-upgrade-171701 kubelet[12634]: E1107 17:25:55.687275   12634 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.357519  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:56 kubernetes-upgrade-171701 kubelet[12645]: E1107 17:25:56.435401   12645 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.357871  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:57 kubernetes-upgrade-171701 kubelet[12657]: E1107 17:25:57.184653   12657 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.358224  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:57 kubernetes-upgrade-171701 kubelet[12668]: E1107 17:25:57.933836   12668 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.358600  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:58 kubernetes-upgrade-171701 kubelet[12680]: E1107 17:25:58.701311   12680 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.358954  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:59 kubernetes-upgrade-171701 kubelet[12691]: E1107 17:25:59.466815   12691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.359303  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:00 kubernetes-upgrade-171701 kubelet[12703]: E1107 17:26:00.184802   12703 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.359653  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:00 kubernetes-upgrade-171701 kubelet[12715]: E1107 17:26:00.951202   12715 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.360002  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:01 kubernetes-upgrade-171701 kubelet[12726]: E1107 17:26:01.686955   12726 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.360350  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:02 kubernetes-upgrade-171701 kubelet[12737]: E1107 17:26:02.432236   12737 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.360727  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:03 kubernetes-upgrade-171701 kubelet[12747]: E1107 17:26:03.183545   12747 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.361078  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:03 kubernetes-upgrade-171701 kubelet[12758]: E1107 17:26:03.946846   12758 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.361430  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:04 kubernetes-upgrade-171701 kubelet[12769]: E1107 17:26:04.687960   12769 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.361781  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:05 kubernetes-upgrade-171701 kubelet[12780]: E1107 17:26:05.438874   12780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.362138  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:06 kubernetes-upgrade-171701 kubelet[12791]: E1107 17:26:06.189095   12791 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.362508  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:06 kubernetes-upgrade-171701 kubelet[12801]: E1107 17:26:06.933512   12801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.362865  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:07 kubernetes-upgrade-171701 kubelet[12813]: E1107 17:26:07.689975   12813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.363210  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:08 kubernetes-upgrade-171701 kubelet[12824]: E1107 17:26:08.432978   12824 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.363562  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:09 kubernetes-upgrade-171701 kubelet[12835]: E1107 17:26:09.191366   12835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.363915  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:09 kubernetes-upgrade-171701 kubelet[12846]: E1107 17:26:09.938257   12846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.364263  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:10 kubernetes-upgrade-171701 kubelet[12856]: E1107 17:26:10.691595   12856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.364611  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:11 kubernetes-upgrade-171701 kubelet[12866]: E1107 17:26:11.431179   12866 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.364962  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:12 kubernetes-upgrade-171701 kubelet[12877]: E1107 17:26:12.185124   12877 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.365313  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:12 kubernetes-upgrade-171701 kubelet[12889]: E1107 17:26:12.933722   12889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.365664  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:13 kubernetes-upgrade-171701 kubelet[12900]: E1107 17:26:13.684135   12900 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.366035  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:14 kubernetes-upgrade-171701 kubelet[12911]: E1107 17:26:14.431226   12911 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.366427  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:15 kubernetes-upgrade-171701 kubelet[12922]: E1107 17:26:15.181664   12922 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.366786  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:15 kubernetes-upgrade-171701 kubelet[12933]: E1107 17:26:15.934202   12933 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.367135  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:16 kubernetes-upgrade-171701 kubelet[12945]: E1107 17:26:16.684791   12945 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.367483  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:17 kubernetes-upgrade-171701 kubelet[12957]: E1107 17:26:17.430230   12957 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.367831  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:18 kubernetes-upgrade-171701 kubelet[12968]: E1107 17:26:18.181957   12968 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.368184  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:18 kubernetes-upgrade-171701 kubelet[12978]: E1107 17:26:18.933179   12978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.368532  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:19 kubernetes-upgrade-171701 kubelet[12989]: E1107 17:26:19.685038   12989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.368885  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:20 kubernetes-upgrade-171701 kubelet[13001]: E1107 17:26:20.431864   13001 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.369240  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:21 kubernetes-upgrade-171701 kubelet[13012]: E1107 17:26:21.183503   13012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.369589  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:21 kubernetes-upgrade-171701 kubelet[13023]: E1107 17:26:21.933298   13023 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.369937  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:22 kubernetes-upgrade-171701 kubelet[13034]: E1107 17:26:22.684214   13034 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.370290  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:23 kubernetes-upgrade-171701 kubelet[13045]: E1107 17:26:23.436495   13045 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.370665  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:24 kubernetes-upgrade-171701 kubelet[13056]: E1107 17:26:24.186039   13056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.371020  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:24 kubernetes-upgrade-171701 kubelet[13067]: E1107 17:26:24.936206   13067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.371367  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:25 kubernetes-upgrade-171701 kubelet[13077]: E1107 17:26:25.694649   13077 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.371716  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:26 kubernetes-upgrade-171701 kubelet[13088]: E1107 17:26:26.437942   13088 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.372070  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:27 kubernetes-upgrade-171701 kubelet[13099]: E1107 17:26:27.185891   13099 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.372532  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:27 kubernetes-upgrade-171701 kubelet[13109]: E1107 17:26:27.932748   13109 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.372887  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:28 kubernetes-upgrade-171701 kubelet[13120]: E1107 17:26:28.687712   13120 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.373248  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:29 kubernetes-upgrade-171701 kubelet[13131]: E1107 17:26:29.440360   13131 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.373601  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:30 kubernetes-upgrade-171701 kubelet[13143]: E1107 17:26:30.191315   13143 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.373946  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:30 kubernetes-upgrade-171701 kubelet[13154]: E1107 17:26:30.933465   13154 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.374297  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:31 kubernetes-upgrade-171701 kubelet[13165]: E1107 17:26:31.693566   13165 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.374701  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:32 kubernetes-upgrade-171701 kubelet[13175]: E1107 17:26:32.438004   13175 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.375069  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:33 kubernetes-upgrade-171701 kubelet[13186]: E1107 17:26:33.187617   13186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.375418  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:33 kubernetes-upgrade-171701 kubelet[13196]: E1107 17:26:33.933584   13196 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
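Every kubelet restart in the list above dies on the same parse error: `--cni-conf-dir` was removed from the kubelet (along with the other dockershim networking flags) in v1.24, so a flags file written for an older kubelet makes the newer binary exit immediately at startup. A minimal diagnostic sketch for this situation (the helper name is hypothetical; the flags-file path is the one kubeadm writes per the log below):

```shell
# find_stale_flags FILE - print any kubelet flags removed in v1.24 that
# are still present in FILE (e.g. /var/lib/kubelet/kubeadm-flags.env).
# Any hit here explains a "failed to parse kubelet flag: unknown flag"
# crash loop like the one in the log above.
find_stale_flags() {
  for flag in --cni-conf-dir --cni-bin-dir --network-plugin; do
    if grep -q -- "$flag" "$1" 2>/dev/null; then
      echo "stale flag found: $flag"
    fi
  done
  return 0
}

# On the failing node one would run:
find_stale_flags /var/lib/kubelet/kubeadm-flags.env
```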
	I1107 17:26:34.375537  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:26:34.375553  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1107 17:26:34.393748  209319 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1107 17:24:37.932315   11363 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1107 17:26:34.393806  209319 out.go:239] * 
	W1107 17:26:34.394054  209319 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1107 17:24:37.932315   11363 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 17:26:34.394089  209319 out.go:239] * 
	W1107 17:26:34.395058  209319 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 17:26:34.465899  209319 out.go:177] X Problems detected in kubelet:
	I1107 17:26:34.528428  209319 out.go:177]   Nov 07 17:25:44 kubernetes-upgrade-171701 kubelet[12470]: E1107 17:25:44.431940   12470 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:26:34.600496  209319 out.go:177]   Nov 07 17:25:45 kubernetes-upgrade-171701 kubelet[12480]: E1107 17:25:45.192163   12480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:26:34.707077  209319 out.go:177]   Nov 07 17:25:45 kubernetes-upgrade-171701 kubelet[12492]: E1107 17:25:45.940674   12492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:26:34.743787  209319 out.go:177] 
	W1107 17:26:34.758456  209319 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1107 17:24:37.932315   11363 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
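The `[kubelet-check]` probe quoted throughout the output is a plain HTTP GET against the kubelet's healthz endpoint; 10248 is the kubelet's default healthz port, and "connection refused" means no kubelet process is listening at all. A hedged sketch of the same probe, useful for confirming by hand that the kubelet never came up (`kubelet_healthz` is a hypothetical helper name):

```shell
# kubelet_healthz [URL] - replicate the [kubelet-check] probe from the
# log: an HTTP GET against the kubelet healthz endpoint. Prints a status
# line instead of failing so it can be run repeatedly while watching
# 'journalctl -u kubelet'.
kubelet_healthz() {
  url="${1:-http://localhost:10248/healthz}"
  if curl -sSf --max-time 2 "$url" >/dev/null 2>&1; then
    echo "kubelet healthy"
  else
    echo "kubelet not responding at $url"
  fi
}

# On the failing node one would run:
kubelet_healthz
```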
	
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1107 17:24:37.932315   11363 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 17:26:34.758593  209319 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1107 17:26:34.758655  209319 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1107 17:26:34.765814  209319 out.go:177] 

** /stderr **
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-171701 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-171701 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-171701 version --output=json: exit status 1 (50.410441ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "25",
	    "gitVersion": "v1.25.3",
	    "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	    "gitTreeState": "clean",
	    "buildDate": "2022-10-12T10:57:26Z",
	    "goVersion": "go1.19.2",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  },
	  "kustomizeVersion": "v4.5.7"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.67.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2022-11-07 17:26:35.206186224 +0000 UTC m=+2469.500603717
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-171701
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-171701:

-- stdout --
	[
	    {
	        "Id": "46465fe84ba16eeb71261ba8ccdd15e356143d5d8d2ec808edbdd2e5c993129e",
	        "Created": "2022-11-07T17:17:08.749520373Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 210113,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:17:49.872941873Z",
	            "FinishedAt": "2022-11-07T17:17:48.097514506Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/46465fe84ba16eeb71261ba8ccdd15e356143d5d8d2ec808edbdd2e5c993129e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46465fe84ba16eeb71261ba8ccdd15e356143d5d8d2ec808edbdd2e5c993129e/hostname",
	        "HostsPath": "/var/lib/docker/containers/46465fe84ba16eeb71261ba8ccdd15e356143d5d8d2ec808edbdd2e5c993129e/hosts",
	        "LogPath": "/var/lib/docker/containers/46465fe84ba16eeb71261ba8ccdd15e356143d5d8d2ec808edbdd2e5c993129e/46465fe84ba16eeb71261ba8ccdd15e356143d5d8d2ec808edbdd2e5c993129e-json.log",
	        "Name": "/kubernetes-upgrade-171701",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-171701:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-171701",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6f1ef85c3c6616b97a8b582e3dbf1123cff4cbf70a08468e5314dba9f9e9eb21-init/diff:/var/lib/docker/overlay2/50f34786c57872c77d74fc1e1bfc5c830eecdaaa307731f7f0968ecd4a1f1563/diff:/var/lib/docker/overlay2/7bd2077ca57b1a9d268f813d36a75f7979f1fc4acedca337c909926df0984abc/diff:/var/lib/docker/overlay2/fc584b8d731e3e1a78208322d9ad4f5e4ad9c3bcaa0f08927b91ce3c8637e0c1/diff:/var/lib/docker/overlay2/b1015b3e809f7445f186f197e10ccde2f6313a9c6860e2a15469f8efb401040d/diff:/var/lib/docker/overlay2/c333cad43ceb2005c0c4df6e6055a141624b85a82498fdd043cc72ccb83232a2/diff:/var/lib/docker/overlay2/e8adaa498090aa250a4bb91e7b41283b97dd43550202038f2ba75fb6fce1963e/diff:/var/lib/docker/overlay2/21ee34913cc32f41efb30d896d169ee516ce1865cdf9ed62125bad1d7b760ebf/diff:/var/lib/docker/overlay2/1b1e3fc8fc878d0731cfc2e081355a9d88e2832592699aec0d7fdef0b4aa2536/diff:/var/lib/docker/overlay2/4b91e729bf04aac130fb8d8bfcab139c95e0ef3f6a774013de6b68a489234ec6/diff:/var/lib/docker/overlay2/4fa234
40214db584cc2d06610d07177bcb3f52aaa6485fc6d0c5fe8830500eb8/diff:/var/lib/docker/overlay2/16748108f66ccb40a4a3b20805c0085d2865c56f7f76ef79cad24498e9ffe9d0/diff:/var/lib/docker/overlay2/ed8e95539c1661d85da89eceddad9e582c9ea46b80010c6f68d080d92c9d6b5a/diff:/var/lib/docker/overlay2/df5567a2898a9e8a1be97266503eb95798b79e37668e3073e7f439219defa1b1/diff:/var/lib/docker/overlay2/b70d157c56a0610efd610495efa704a0548753e54dc2f98f56c33b18d5bdb831/diff:/var/lib/docker/overlay2/3a1efa8a7fda429b96ee67adce9f25aa586838fff1d0e33a145074eb35f92e3b/diff:/var/lib/docker/overlay2/adec1560668aa1c06d2f672622d778fb7c7a9958814773573f9b9bd167f6c860/diff:/var/lib/docker/overlay2/b092628cb8f256d44c2fbb9ae9bccaf57d2d6209aa4f402d78256949eae7feb3/diff:/var/lib/docker/overlay2/3356cfa5fa7047a97e9c2b7cb8952bdbe042be5633202a2fb86fb78eb24d01c3/diff:/var/lib/docker/overlay2/e2eda1c37c57f4adc2cf7cba48eed6c8ffe3d2f47e31c07d647fd0597cb1aaee/diff:/var/lib/docker/overlay2/0fdab607cc4d78cb0a3fbd3041f4d6f1fabd525b190ca8fe214ce0d708a7f772/diff:/var/lib/d
ocker/overlay2/746235f8e2202d20a55b5a9fea42575d53cbce903cd7196f79b6546eb912216c/diff:/var/lib/docker/overlay2/bb90b859707e89d2d71c36f1d9688d6b09d32b9fce71c1a4caffab9be2bbb188/diff:/var/lib/docker/overlay2/10fdb9cfaf7ec1249107401913d80e6952d57412f21964005f33a1ec0edbc3bc/diff:/var/lib/docker/overlay2/c1af211c834a44cc9932c4e3a12691a9d1d7c2e14e241cb5a8b881d40534523f/diff:/var/lib/docker/overlay2/de7a70af2c1a55113b9be8a92239749d35dd866bda013a8048f5bccbc98a258d/diff:/var/lib/docker/overlay2/638ba6771779e36e94f47227270733bc19e786d6084420c1cb46c8d942883a6b/diff:/var/lib/docker/overlay2/f4e0800cf49a41c3993c1d146cd1613cacaf8996e27b642bc6359f30ae301891/diff:/var/lib/docker/overlay2/0c8275272897551e4e3bd4a403ea631396d4e226e0d1524a973391b15b868f09/diff:/var/lib/docker/overlay2/405eea0895fd24bd6bcbfa316e2f2f55186a3a8c11836a41776b7078210cef3e/diff:/var/lib/docker/overlay2/5344d9cb5a12ef430d7c5246346fdf0be30cf22430cea41ce3eeff0db5b4d629/diff:/var/lib/docker/overlay2/3a1aae2d89cdb6efed9f25c1aa5fc3b09afd34de1dea7ab15bbf250d2c1
ccaeb/diff:/var/lib/docker/overlay2/fe4503be964576b1bd1b38c1789d575ebd1d3a40807fc8fddd0d03689f815101/diff:/var/lib/docker/overlay2/cd964cc10ac76d7d224e0c14361f663890fb1aa42543b9e6aad6231ce574ab75/diff:/var/lib/docker/overlay2/d3b7495eb871dc08a1299ff6623317982ae4fcb245a496232f5ecb3c7db2f65e/diff:/var/lib/docker/overlay2/f47e602141e8a2a0110308ae1e12d31d503b156f1438454b031a4428e38d6fdf/diff:/var/lib/docker/overlay2/2fa5513e215c12fbae0f66df8f9239d68407115fc99d2d61fad469cab8e90074/diff:/var/lib/docker/overlay2/35a81d0664a9558cbb797f91f0936edc4dc40d04124e0e087016a1965853fd34/diff:/var/lib/docker/overlay2/0335b50ae6313640c86195beb2c170e6024ff55e7e7c5d4799d3fb36388be83a/diff:/var/lib/docker/overlay2/4756e235309d1e95924ec8f07ff825ebdcd7384760cb06121fcb6299bbad2e5c/diff:/var/lib/docker/overlay2/b3a9deb3bf75ddb8b41c22ba322da02c3379475903d07dd985bcef4a317a514a/diff:/var/lib/docker/overlay2/2e829bbc0c18a173f30f9904a6e0a3b3dd0b06b9f8e518ddcf6d4b8237876fb8/diff:/var/lib/docker/overlay2/eaf774e8177ba46b1b9f087012edcc4e413aa6
e302e711cb62dae1ca92ac7b5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6f1ef85c3c6616b97a8b582e3dbf1123cff4cbf70a08468e5314dba9f9e9eb21/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6f1ef85c3c6616b97a8b582e3dbf1123cff4cbf70a08468e5314dba9f9e9eb21/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6f1ef85c3c6616b97a8b582e3dbf1123cff4cbf70a08468e5314dba9f9e9eb21/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-171701",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-171701/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-171701",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-171701",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-171701",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "02f1e0e3dd0aebb67579eccb77fd6ae0d5eaf1d79d1f7ec914b2e3066387d7d5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49342"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49341"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49338"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49340"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49339"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/02f1e0e3dd0a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-171701": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "46465fe84ba1",
	                        "kubernetes-upgrade-171701"
	                    ],
	                    "NetworkID": "68d12ec83c156cd19210682007576787ae34652bdab8fcd8b4595678ab01160b",
	                    "EndpointID": "d11920e34f631eee5e8a65522bd5f8a18cc94546f3f504bb06e37d21c34444bf",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-171701 -n kubernetes-upgrade-171701
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-171701 -n kubernetes-upgrade-171701: exit status 2 (359.515777ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-171701 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-171701 logs -n 25: (1.147312257s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | force-systemd-flag-171908                         | force-systemd-flag-171908    | jenkins | v1.28.0 | 07 Nov 22 17:19 UTC | 07 Nov 22 17:19 UTC |
	|         | ssh cat                                           |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                       |                              |         |         |                     |                     |
	| delete  | -p force-systemd-flag-171908                      | force-systemd-flag-171908    | jenkins | v1.28.0 | 07 Nov 22 17:19 UTC | 07 Nov 22 17:19 UTC |
	| start   | -p no-preload-171935                              | no-preload-171935            | jenkins | v1.28.0 | 07 Nov 22 17:19 UTC | 07 Nov 22 17:20 UTC |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-171935        | no-preload-171935            | jenkins | v1.28.0 | 07 Nov 22 17:20 UTC | 07 Nov 22 17:20 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p no-preload-171935                              | no-preload-171935            | jenkins | v1.28.0 | 07 Nov 22 17:20 UTC | 07 Nov 22 17:20 UTC |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-171935             | no-preload-171935            | jenkins | v1.28.0 | 07 Nov 22 17:20 UTC | 07 Nov 22 17:20 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-171935                              | no-preload-171935            | jenkins | v1.28.0 | 07 Nov 22 17:20 UTC | 07 Nov 22 17:26 UTC |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-171920   | old-k8s-version-171920       | jenkins | v1.28.0 | 07 Nov 22 17:21 UTC | 07 Nov 22 17:21 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-171920                         | old-k8s-version-171920       | jenkins | v1.28.0 | 07 Nov 22 17:21 UTC | 07 Nov 22 17:21 UTC |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-171920        | old-k8s-version-171920       | jenkins | v1.28.0 | 07 Nov 22 17:21 UTC | 07 Nov 22 17:21 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-171920                         | old-k8s-version-171920       | jenkins | v1.28.0 | 07 Nov 22 17:21 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-171827                         | cert-expiration-171827       | jenkins | v1.28.0 | 07 Nov 22 17:22 UTC | 07 Nov 22 17:22 UTC |
	|         | --memory=2048                                     |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                           |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-171827                         | cert-expiration-171827       | jenkins | v1.28.0 | 07 Nov 22 17:22 UTC | 07 Nov 22 17:22 UTC |
	| start   | -p embed-certs-172219                             | embed-certs-172219           | jenkins | v1.28.0 | 07 Nov 22 17:22 UTC | 07 Nov 22 17:23 UTC |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-172219       | embed-certs-172219           | jenkins | v1.28.0 | 07 Nov 22 17:23 UTC | 07 Nov 22 17:23 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-172219                             | embed-certs-172219           | jenkins | v1.28.0 | 07 Nov 22 17:23 UTC | 07 Nov 22 17:23 UTC |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-172219            | embed-certs-172219           | jenkins | v1.28.0 | 07 Nov 22 17:23 UTC | 07 Nov 22 17:23 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-172219                             | embed-certs-172219           | jenkins | v1.28.0 | 07 Nov 22 17:23 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-171935 sudo                         | no-preload-171935            | jenkins | v1.28.0 | 07 Nov 22 17:26 UTC | 07 Nov 22 17:26 UTC |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-171935                              | no-preload-171935            | jenkins | v1.28.0 | 07 Nov 22 17:26 UTC | 07 Nov 22 17:26 UTC |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-171935                              | no-preload-171935            | jenkins | v1.28.0 | 07 Nov 22 17:26 UTC | 07 Nov 22 17:26 UTC |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-171935                              | no-preload-171935            | jenkins | v1.28.0 | 07 Nov 22 17:26 UTC | 07 Nov 22 17:26 UTC |
	| delete  | -p no-preload-171935                              | no-preload-171935            | jenkins | v1.28.0 | 07 Nov 22 17:26 UTC | 07 Nov 22 17:26 UTC |
	| delete  | -p                                                | disable-driver-mounts-172629 | jenkins | v1.28.0 | 07 Nov 22 17:26 UTC | 07 Nov 22 17:26 UTC |
	|         | disable-driver-mounts-172629                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-172629 | jenkins | v1.28.0 | 07 Nov 22 17:26 UTC |                     |
	|         | default-k8s-diff-port-172629                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 17:26:29
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 17:26:29.634035  270513 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:26:29.634157  270513 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:26:29.634169  270513 out.go:309] Setting ErrFile to fd 2...
	I1107 17:26:29.634175  270513 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:26:29.634281  270513 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
	I1107 17:26:29.634909  270513 out.go:303] Setting JSON to false
	I1107 17:26:29.636309  270513 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11343,"bootTime":1667830647,"procs":434,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 17:26:29.636379  270513 start.go:126] virtualization: kvm guest
	I1107 17:26:29.639169  270513 out.go:177] * [default-k8s-diff-port-172629] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 17:26:29.640783  270513 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:26:29.640703  270513 notify.go:220] Checking for updates...
	I1107 17:26:29.642342  270513 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:26:29.644189  270513 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	I1107 17:26:29.645829  270513 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	I1107 17:26:29.649287  270513 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 17:26:29.651185  270513 config.go:180] Loaded profile config "embed-certs-172219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 17:26:29.651296  270513 config.go:180] Loaded profile config "kubernetes-upgrade-171701": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 17:26:29.651408  270513 config.go:180] Loaded profile config "old-k8s-version-171920": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1107 17:26:29.651459  270513 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:26:29.681010  270513 docker.go:137] docker version: linux-20.10.21
	I1107 17:26:29.681085  270513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:26:29.781191  270513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:57 SystemTime:2022-11-07 17:26:29.701810158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:26:29.781291  270513 docker.go:254] overlay module found
	I1107 17:26:29.783306  270513 out.go:177] * Using the docker driver based on user configuration
	I1107 17:26:29.784756  270513 start.go:282] selected driver: docker
	I1107 17:26:29.784825  270513 start.go:808] validating driver "docker" against <nil>
	I1107 17:26:29.784861  270513 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:26:29.785982  270513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:26:29.881701  270513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:57 SystemTime:2022-11-07 17:26:29.804788879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:26:29.881817  270513 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 17:26:29.882001  270513 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 17:26:29.884003  270513 out.go:177] * Using Docker driver with root privileges
	I1107 17:26:29.885622  270513 cni.go:95] Creating CNI manager for ""
	I1107 17:26:29.885636  270513 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1107 17:26:29.885650  270513 start_flags.go:312] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 17:26:29.885682  270513 start_flags.go:317] config:
	{Name:default-k8s-diff-port-172629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-172629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:26:29.887584  270513 out.go:177] * Starting control plane node default-k8s-diff-port-172629 in cluster default-k8s-diff-port-172629
	I1107 17:26:29.889112  270513 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1107 17:26:29.890532  270513 out.go:177] * Pulling base image ...
	I1107 17:26:29.891816  270513 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1107 17:26:29.891863  270513 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1107 17:26:29.891874  270513 cache.go:57] Caching tarball of preloaded images
	I1107 17:26:29.891904  270513 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 17:26:29.892165  270513 preload.go:174] Found /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 17:26:29.892193  270513 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
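The preload lookup logged above amounts to a file-existence check under the minikube cache directory. A minimal sketch of that check, assuming the standard cache layout (the fallback message is illustrative, not minikube's actual wording):

```shell
# Where minikube keeps preloaded image tarballs; version string matches the
# one this run resolved (v18 / v1.25.3 / containerd / overlay2 / amd64).
preload="${MINIKUBE_HOME:-$HOME}/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4"
if [ -f "$preload" ]; then
  echo "found local preload, skipping download"
else
  echo "no local preload, download required"
fi
```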
	I1107 17:26:29.892315  270513 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/default-k8s-diff-port-172629/config.json ...
	I1107 17:26:29.892342  270513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/default-k8s-diff-port-172629/config.json: {Name:mk49da6fe801872db65f2b0d73a0d3ff9892db37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:26:29.915746  270513 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 17:26:29.915776  270513 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 17:26:29.915792  270513 cache.go:208] Successfully downloaded all kic artifacts
	I1107 17:26:29.915822  270513 start.go:364] acquiring machines lock for default-k8s-diff-port-172629: {Name:mk4c0fd6314ef76eb6f7c6c43aa0e98baff10b2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 17:26:29.915937  270513 start.go:368] acquired machines lock for "default-k8s-diff-port-172629" in 95.726µs
	I1107 17:26:29.915960  270513 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-172629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-172629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1107 17:26:29.916076  270513 start.go:125] createHost starting for "" (driver="docker")
	I1107 17:26:28.674299  257245 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-x8kg5" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:31.172333  257245 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-x8kg5" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:28.985407  245984 pod_ready.go:102] pod "metrics-server-7958775c-nz4nc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:31.485924  245984 pod_ready.go:102] pod "metrics-server-7958775c-nz4nc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:33.997479  209319 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 17:26:33.997681  209319 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 17:26:33.997691  209319 kubeadm.go:317] 
	I1107 17:26:33.997790  209319 kubeadm.go:317] Unfortunately, an error has occurred:
	I1107 17:26:33.997896  209319 kubeadm.go:317] 	timed out waiting for the condition
	I1107 17:26:33.997908  209319 kubeadm.go:317] 
	I1107 17:26:33.997946  209319 kubeadm.go:317] This error is likely caused by:
	I1107 17:26:33.997994  209319 kubeadm.go:317] 	- The kubelet is not running
	I1107 17:26:33.998156  209319 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 17:26:33.998181  209319 kubeadm.go:317] 
	I1107 17:26:33.998286  209319 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 17:26:33.998376  209319 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1107 17:26:33.998417  209319 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1107 17:26:33.998434  209319 kubeadm.go:317] 
	I1107 17:26:33.998676  209319 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 17:26:33.998805  209319 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1107 17:26:33.998945  209319 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1107 17:26:33.999084  209319 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I1107 17:26:33.999187  209319 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1107 17:26:33.999289  209319 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I1107 17:26:34.000479  209319 kubeadm.go:317] W1107 17:24:37.932315   11363 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1107 17:26:34.000728  209319 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1107 17:26:34.000855  209319 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 17:26:34.000949  209319 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 17:26:34.001034  209319 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1107 17:26:34.001148  209319 kubeadm.go:398] StartCluster complete in 8m7.355554636s
	I1107 17:26:34.001194  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1107 17:26:34.001254  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 17:26:34.025316  209319 cri.go:87] found id: ""
	I1107 17:26:34.025341  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.025349  209319 logs.go:276] No container was found matching "kube-apiserver"
	I1107 17:26:34.025357  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1107 17:26:34.025418  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 17:26:34.048389  209319 cri.go:87] found id: ""
	I1107 17:26:34.048431  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.048456  209319 logs.go:276] No container was found matching "etcd"
	I1107 17:26:34.048464  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1107 17:26:34.048511  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 17:26:34.071769  209319 cri.go:87] found id: ""
	I1107 17:26:34.071794  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.071800  209319 logs.go:276] No container was found matching "coredns"
	I1107 17:26:34.071806  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1107 17:26:34.071852  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 17:26:34.095098  209319 cri.go:87] found id: ""
	I1107 17:26:34.095129  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.095136  209319 logs.go:276] No container was found matching "kube-scheduler"
	I1107 17:26:34.095143  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1107 17:26:34.095187  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 17:26:34.119878  209319 cri.go:87] found id: ""
	I1107 17:26:34.119904  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.119910  209319 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:26:34.119917  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1107 17:26:34.119977  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1107 17:26:34.144038  209319 cri.go:87] found id: ""
	I1107 17:26:34.144070  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.144077  209319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:26:34.144084  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1107 17:26:34.144137  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1107 17:26:34.168381  209319 cri.go:87] found id: ""
	I1107 17:26:34.168409  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.168418  209319 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:26:34.168426  209319 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 17:26:34.168482  209319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 17:26:34.193472  209319 cri.go:87] found id: ""
	I1107 17:26:34.193499  209319 logs.go:274] 0 containers: []
	W1107 17:26:34.193505  209319 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 17:26:34.193518  209319 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:26:34.193532  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:26:34.252476  209319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:26:34.252515  209319 logs.go:123] Gathering logs for containerd ...
	I1107 17:26:34.252530  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1107 17:26:34.307273  209319 logs.go:123] Gathering logs for container status ...
	I1107 17:26:34.307322  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:26:34.334877  209319 logs.go:123] Gathering logs for kubelet ...
	I1107 17:26:34.334910  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 17:26:34.351545  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:44 kubernetes-upgrade-171701 kubelet[12470]: E1107 17:25:44.431940   12470 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.351917  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:45 kubernetes-upgrade-171701 kubelet[12480]: E1107 17:25:45.192163   12480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.352274  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:45 kubernetes-upgrade-171701 kubelet[12492]: E1107 17:25:45.940674   12492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.352626  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:46 kubernetes-upgrade-171701 kubelet[12502]: E1107 17:25:46.682041   12502 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.353097  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:47 kubernetes-upgrade-171701 kubelet[12513]: E1107 17:25:47.430072   12513 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.353609  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:48 kubernetes-upgrade-171701 kubelet[12523]: E1107 17:25:48.182379   12523 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.353963  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:48 kubernetes-upgrade-171701 kubelet[12535]: E1107 17:25:48.932626   12535 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.354370  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:49 kubernetes-upgrade-171701 kubelet[12546]: E1107 17:25:49.684205   12546 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.354744  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:50 kubernetes-upgrade-171701 kubelet[12557]: E1107 17:25:50.433949   12557 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.355096  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:51 kubernetes-upgrade-171701 kubelet[12568]: E1107 17:25:51.183452   12568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.355444  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:51 kubernetes-upgrade-171701 kubelet[12579]: E1107 17:25:51.932730   12579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.355787  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:52 kubernetes-upgrade-171701 kubelet[12590]: E1107 17:25:52.683014   12590 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.356135  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:53 kubernetes-upgrade-171701 kubelet[12601]: E1107 17:25:53.431630   12601 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.356476  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:54 kubernetes-upgrade-171701 kubelet[12611]: E1107 17:25:54.183300   12611 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.356824  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:54 kubernetes-upgrade-171701 kubelet[12623]: E1107 17:25:54.938942   12623 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.357167  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:55 kubernetes-upgrade-171701 kubelet[12634]: E1107 17:25:55.687275   12634 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.357519  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:56 kubernetes-upgrade-171701 kubelet[12645]: E1107 17:25:56.435401   12645 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.357871  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:57 kubernetes-upgrade-171701 kubelet[12657]: E1107 17:25:57.184653   12657 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.358224  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:57 kubernetes-upgrade-171701 kubelet[12668]: E1107 17:25:57.933836   12668 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.358600  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:58 kubernetes-upgrade-171701 kubelet[12680]: E1107 17:25:58.701311   12680 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.358954  209319 logs.go:138] Found kubelet problem: Nov 07 17:25:59 kubernetes-upgrade-171701 kubelet[12691]: E1107 17:25:59.466815   12691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.359303  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:00 kubernetes-upgrade-171701 kubelet[12703]: E1107 17:26:00.184802   12703 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.359653  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:00 kubernetes-upgrade-171701 kubelet[12715]: E1107 17:26:00.951202   12715 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.360002  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:01 kubernetes-upgrade-171701 kubelet[12726]: E1107 17:26:01.686955   12726 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.360350  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:02 kubernetes-upgrade-171701 kubelet[12737]: E1107 17:26:02.432236   12737 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.360727  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:03 kubernetes-upgrade-171701 kubelet[12747]: E1107 17:26:03.183545   12747 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.361078  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:03 kubernetes-upgrade-171701 kubelet[12758]: E1107 17:26:03.946846   12758 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.361430  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:04 kubernetes-upgrade-171701 kubelet[12769]: E1107 17:26:04.687960   12769 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.361781  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:05 kubernetes-upgrade-171701 kubelet[12780]: E1107 17:26:05.438874   12780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.362138  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:06 kubernetes-upgrade-171701 kubelet[12791]: E1107 17:26:06.189095   12791 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.362508  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:06 kubernetes-upgrade-171701 kubelet[12801]: E1107 17:26:06.933512   12801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.362865  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:07 kubernetes-upgrade-171701 kubelet[12813]: E1107 17:26:07.689975   12813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.363210  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:08 kubernetes-upgrade-171701 kubelet[12824]: E1107 17:26:08.432978   12824 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.363562  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:09 kubernetes-upgrade-171701 kubelet[12835]: E1107 17:26:09.191366   12835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.363915  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:09 kubernetes-upgrade-171701 kubelet[12846]: E1107 17:26:09.938257   12846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.364263  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:10 kubernetes-upgrade-171701 kubelet[12856]: E1107 17:26:10.691595   12856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.364611  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:11 kubernetes-upgrade-171701 kubelet[12866]: E1107 17:26:11.431179   12866 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.364962  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:12 kubernetes-upgrade-171701 kubelet[12877]: E1107 17:26:12.185124   12877 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.365313  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:12 kubernetes-upgrade-171701 kubelet[12889]: E1107 17:26:12.933722   12889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.365664  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:13 kubernetes-upgrade-171701 kubelet[12900]: E1107 17:26:13.684135   12900 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.366035  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:14 kubernetes-upgrade-171701 kubelet[12911]: E1107 17:26:14.431226   12911 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.366427  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:15 kubernetes-upgrade-171701 kubelet[12922]: E1107 17:26:15.181664   12922 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.366786  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:15 kubernetes-upgrade-171701 kubelet[12933]: E1107 17:26:15.934202   12933 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.367135  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:16 kubernetes-upgrade-171701 kubelet[12945]: E1107 17:26:16.684791   12945 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.367483  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:17 kubernetes-upgrade-171701 kubelet[12957]: E1107 17:26:17.430230   12957 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.367831  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:18 kubernetes-upgrade-171701 kubelet[12968]: E1107 17:26:18.181957   12968 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.368184  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:18 kubernetes-upgrade-171701 kubelet[12978]: E1107 17:26:18.933179   12978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.368532  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:19 kubernetes-upgrade-171701 kubelet[12989]: E1107 17:26:19.685038   12989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.368885  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:20 kubernetes-upgrade-171701 kubelet[13001]: E1107 17:26:20.431864   13001 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.369240  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:21 kubernetes-upgrade-171701 kubelet[13012]: E1107 17:26:21.183503   13012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.369589  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:21 kubernetes-upgrade-171701 kubelet[13023]: E1107 17:26:21.933298   13023 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.369937  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:22 kubernetes-upgrade-171701 kubelet[13034]: E1107 17:26:22.684214   13034 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.370290  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:23 kubernetes-upgrade-171701 kubelet[13045]: E1107 17:26:23.436495   13045 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.370665  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:24 kubernetes-upgrade-171701 kubelet[13056]: E1107 17:26:24.186039   13056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.371020  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:24 kubernetes-upgrade-171701 kubelet[13067]: E1107 17:26:24.936206   13067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.371367  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:25 kubernetes-upgrade-171701 kubelet[13077]: E1107 17:26:25.694649   13077 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.371716  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:26 kubernetes-upgrade-171701 kubelet[13088]: E1107 17:26:26.437942   13088 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.372070  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:27 kubernetes-upgrade-171701 kubelet[13099]: E1107 17:26:27.185891   13099 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.372532  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:27 kubernetes-upgrade-171701 kubelet[13109]: E1107 17:26:27.932748   13109 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.372887  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:28 kubernetes-upgrade-171701 kubelet[13120]: E1107 17:26:28.687712   13120 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.373248  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:29 kubernetes-upgrade-171701 kubelet[13131]: E1107 17:26:29.440360   13131 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.373601  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:30 kubernetes-upgrade-171701 kubelet[13143]: E1107 17:26:30.191315   13143 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.373946  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:30 kubernetes-upgrade-171701 kubelet[13154]: E1107 17:26:30.933465   13154 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.374297  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:31 kubernetes-upgrade-171701 kubelet[13165]: E1107 17:26:31.693566   13165 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.374701  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:32 kubernetes-upgrade-171701 kubelet[13175]: E1107 17:26:32.438004   13175 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.375069  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:33 kubernetes-upgrade-171701 kubelet[13186]: E1107 17:26:33.187617   13186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1107 17:26:34.375418  209319 logs.go:138] Found kubelet problem: Nov 07 17:26:33 kubernetes-upgrade-171701 kubelet[13196]: E1107 17:26:33.933584   13196 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:26:34.375537  209319 logs.go:123] Gathering logs for dmesg ...
	I1107 17:26:34.375553  209319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1107 17:26:34.393748  209319 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1107 17:24:37.932315   11363 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1107 17:26:34.393806  209319 out.go:239] * 
	W1107 17:26:34.394054  209319 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1107 17:26:34.394089  209319 out.go:239] * 
	W1107 17:26:34.395058  209319 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 17:26:34.465899  209319 out.go:177] X Problems detected in kubelet:
	I1107 17:26:34.528428  209319 out.go:177]   Nov 07 17:25:44 kubernetes-upgrade-171701 kubelet[12470]: E1107 17:25:44.431940   12470 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:26:34.600496  209319 out.go:177]   Nov 07 17:25:45 kubernetes-upgrade-171701 kubelet[12480]: E1107 17:25:45.192163   12480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:26:29.918392  270513 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1107 17:26:29.918622  270513 start.go:159] libmachine.API.Create for "default-k8s-diff-port-172629" (driver="docker")
	I1107 17:26:29.918652  270513 client.go:168] LocalClient.Create starting
	I1107 17:26:29.918703  270513 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem
	I1107 17:26:29.918736  270513 main.go:134] libmachine: Decoding PEM data...
	I1107 17:26:29.918753  270513 main.go:134] libmachine: Parsing certificate...
	I1107 17:26:29.918818  270513 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem
	I1107 17:26:29.918835  270513 main.go:134] libmachine: Decoding PEM data...
	I1107 17:26:29.918850  270513 main.go:134] libmachine: Parsing certificate...
	I1107 17:26:29.919151  270513 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-172629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 17:26:29.940342  270513 cli_runner.go:211] docker network inspect default-k8s-diff-port-172629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 17:26:29.940447  270513 network_create.go:272] running [docker network inspect default-k8s-diff-port-172629] to gather additional debugging logs...
	I1107 17:26:29.940477  270513 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-172629
	W1107 17:26:29.961477  270513 cli_runner.go:211] docker network inspect default-k8s-diff-port-172629 returned with exit code 1
	I1107 17:26:29.961515  270513 network_create.go:275] error running [docker network inspect default-k8s-diff-port-172629]: docker network inspect default-k8s-diff-port-172629: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-172629
	I1107 17:26:29.961539  270513 network_create.go:277] output of [docker network inspect default-k8s-diff-port-172629]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-172629
	
	** /stderr **
	I1107 17:26:29.961592  270513 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:26:29.984755  270513 network.go:246] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-c60ca185471f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d3:45:89:1f}}
	I1107 17:26:29.985464  270513 network.go:246] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-73d930ae71b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:e2:67:c1:53}}
	I1107 17:26:29.986138  270513 network.go:246] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-68d12ec83c15 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:e4:7d:08:fd}}
	I1107 17:26:29.986972  270513 network.go:246] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-9dfed2f08c41 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:a6:9d:a3:de}}
	I1107 17:26:29.987748  270513 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.85.0:0xc0009aa368] misses:0}
	I1107 17:26:29.987783  270513 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 17:26:29.987793  270513 network_create.go:115] attempt to create docker network default-k8s-diff-port-172629 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1107 17:26:29.987841  270513 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-172629 default-k8s-diff-port-172629
	I1107 17:26:30.047920  270513 network_create.go:99] docker network default-k8s-diff-port-172629 192.168.85.0/24 created
	I1107 17:26:30.047951  270513 kic.go:106] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-172629" container
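	The subnet selection traced above can be sketched as follows: minikube tries 192.168.49.0/24 first and, as the "skipping subnet ... that is taken" lines show, advances the third octet in steps of 9 (49, 58, 67, 76, 85) until it reaches a subnet no existing bridge holds. A minimal shell sketch of that scan, where the `taken` list is a hypothetical stand-in for the subnets docker already uses (taken from the log above):

```shell
# Stand-in for the subnets already claimed by docker bridges (see log above).
taken="192.168.49.0 192.168.58.0 192.168.67.0 192.168.76.0"

# Start at 192.168.49.0 and step the third octet by 9 until a subnet is free.
octet=49
while echo "$taken" | grep -qw "192.168.$octet.0"; do
  octet=$((octet + 9))
done
echo "free subnet: 192.168.$octet.0/24"   # prints: free subnet: 192.168.85.0/24
```

	This matches the log: the first four candidates are skipped and 192.168.85.0/24 is reserved for the new network.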
	I1107 17:26:30.048005  270513 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 17:26:30.071007  270513 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-172629 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-172629 --label created_by.minikube.sigs.k8s.io=true
	I1107 17:26:30.093143  270513 oci.go:103] Successfully created a docker volume default-k8s-diff-port-172629
	I1107 17:26:30.093232  270513 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-172629-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-172629 --entrypoint /usr/bin/test -v default-k8s-diff-port-172629:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1107 17:26:30.669437  270513 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-172629
	I1107 17:26:30.669494  270513 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1107 17:26:30.669519  270513 kic.go:179] Starting extracting preloaded images to volume ...
	I1107 17:26:30.669608  270513 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-172629:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 17:26:34.707077  209319 out.go:177]   Nov 07 17:25:45 kubernetes-upgrade-171701 kubelet[12492]: E1107 17:25:45.940674   12492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1107 17:26:34.743787  209319 out.go:177] 
	W1107 17:26:34.758456  209319 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1107 17:24:37.932315   11363 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 17:26:34.758593  209319 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1107 17:26:34.758655  209319 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1107 17:26:34.765814  209319 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2022-11-07 17:17:50 UTC, end at Mon 2022-11-07 17:26:36 UTC. --
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.680684252Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.696371258Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.696426052Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.712841033Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.712893909Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.728789857Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.728851102Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.745985684Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.746040888Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.761755705Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.761814542Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.778218916Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.778269223Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.794616979Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.794672070Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.810303031Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.810376985Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.826932395Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.826992694Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.843519630Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.843574044Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.860404382Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.860462946Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.876753542Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Nov 07 17:24:37 kubernetes-upgrade-171701 containerd[493]: time="2022-11-07T17:24:37.876810392Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000004] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	[  +1.031048] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-9c2a689f9c44
	[  +0.000006] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	[  -0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-9c2a689f9c44
	[  +0.000006] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-9c2a689f9c44
	[  +0.000007] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	[  +2.019758] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-9c2a689f9c44
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-9c2a689f9c44
	[  +0.000004] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-9c2a689f9c44
	[  +0.000001] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	[  +4.059569] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-9c2a689f9c44
	[  +0.000008] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-9c2a689f9c44
	[  +0.000002] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-9c2a689f9c44
	[  +0.000006] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	[  +8.191134] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-9c2a689f9c44
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-9c2a689f9c44
	[  +0.000005] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-9c2a689f9c44
	[  +0.000002] ll header: 00000000: 02 42 e7 ff 28 83 02 42 c0 a8 5e 02 08 00
	
	* 
	* ==> kernel <==
	*  17:26:36 up  3:09,  0 users,  load average: 0.70, 2.03, 1.98
	Linux kubernetes-upgrade-171701 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 17:17:50 UTC, end at Mon 2022-11-07 17:26:36 UTC. --
	Nov 07 17:26:33 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 07 17:26:33 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 153.
	Nov 07 17:26:33 kubernetes-upgrade-171701 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 07 17:26:33 kubernetes-upgrade-171701 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 07 17:26:33 kubernetes-upgrade-171701 kubelet[13196]: E1107 17:26:33.933584   13196 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Nov 07 17:26:33 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 07 17:26:33 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 07 17:26:34 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 154.
	Nov 07 17:26:34 kubernetes-upgrade-171701 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 07 17:26:34 kubernetes-upgrade-171701 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 07 17:26:34 kubernetes-upgrade-171701 kubelet[13342]: E1107 17:26:34.681944   13342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Nov 07 17:26:34 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 07 17:26:34 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 07 17:26:35 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 155.
	Nov 07 17:26:35 kubernetes-upgrade-171701 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 07 17:26:35 kubernetes-upgrade-171701 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 07 17:26:35 kubernetes-upgrade-171701 kubelet[13354]: E1107 17:26:35.434617   13354 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Nov 07 17:26:35 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 07 17:26:35 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 07 17:26:36 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Nov 07 17:26:36 kubernetes-upgrade-171701 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 07 17:26:36 kubernetes-upgrade-171701 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 07 17:26:36 kubernetes-upgrade-171701 kubelet[13377]: E1107 17:26:36.182214   13377 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Nov 07 17:26:36 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 07 17:26:36 kubernetes-upgrade-171701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 17:26:36.676866  271419 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-171701 -n kubernetes-upgrade-171701
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-171701 -n kubernetes-upgrade-171701: exit status 2 (403.771632ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-171701" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-171701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-171701
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-171701: (2.215764211s)
--- FAIL: TestKubernetesUpgrade (577.93s)
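The repeated kubelet crash loop above ("failed to parse kubelet flag: unknown flag: --cni-conf-dir") points at a stale flag file: `--cni-conf-dir` and `--network-plugin` were dockershim-era flags removed from the kubelet in Kubernetes v1.24, so a kubelet upgraded to v1.25.3 refuses to start if a previous run left them in its kubeadm flags file. A minimal sketch of checking for the stale flags; the file contents here are a hypothetical sample of what an old `/var/lib/kubelet/kubeadm-flags.env` might contain, used only for illustration:

```shell
# Hypothetical sample of a pre-1.24 kubeadm-flags.env left over from an upgrade.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
KUBELET_KUBEADM_ARGS="--container-runtime=remote --cni-conf-dir=/etc/cni/net.d --network-plugin=cni"
EOF

# Flags removed with dockershim in v1.24; kubelet >= 1.24 exits with
# "failed to parse kubelet flag: unknown flag" when it sees them.
stale=""
for flag in --cni-conf-dir --network-plugin; do
  grep -q -- "$flag" "$env_file" && stale="$stale $flag"
done
echo "stale kubelet flags:$stale"
rm -f "$env_file"
```

On a real node the same grep against the actual flags file (and `journalctl -xeu kubelet`, as the suggestion in the log says) would confirm whether this is the cause before regenerating the file.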

                                                
                                    
TestNetworkPlugins/group/calico/Start (516.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-171817 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd
E1107 17:29:22.808551   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-171817 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (8m36.689444464s)

                                                
                                                
-- stdout --
	* [calico-171817] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node calico-171817 in cluster calico-171817
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 17:29:16.933937  305211 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:29:16.934146  305211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:29:16.934158  305211 out.go:309] Setting ErrFile to fd 2...
	I1107 17:29:16.934163  305211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:29:16.934275  305211 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
	I1107 17:29:16.934948  305211 out.go:303] Setting JSON to false
	I1107 17:29:16.936502  305211 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11510,"bootTime":1667830647,"procs":693,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 17:29:16.936576  305211 start.go:126] virtualization: kvm guest
	I1107 17:29:16.939125  305211 out.go:177] * [calico-171817] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 17:29:16.940761  305211 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:29:16.940711  305211 notify.go:220] Checking for updates...
	I1107 17:29:16.942222  305211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:29:16.943687  305211 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	I1107 17:29:16.945333  305211 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	I1107 17:29:16.946798  305211 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 17:29:16.949523  305211 config.go:180] Loaded profile config "cilium-171817": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 17:29:16.949684  305211 config.go:180] Loaded profile config "default-k8s-diff-port-172629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 17:29:16.949799  305211 config.go:180] Loaded profile config "kindnet-171816": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 17:29:16.949868  305211 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:29:16.993498  305211 docker.go:137] docker version: linux-20.10.21
	I1107 17:29:16.993637  305211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:29:17.157026  305211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:52 SystemTime:2022-11-07 17:29:17.020932823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:29:17.157159  305211 docker.go:254] overlay module found
	I1107 17:29:17.159696  305211 out.go:177] * Using the docker driver based on user configuration
	I1107 17:29:17.162512  305211 start.go:282] selected driver: docker
	I1107 17:29:17.162542  305211 start.go:808] validating driver "docker" against <nil>
	I1107 17:29:17.162571  305211 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:29:17.163781  305211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:29:17.298650  305211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-07 17:29:17.189549837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:29:17.298757  305211 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 17:29:17.298951  305211 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 17:29:17.301078  305211 out.go:177] * Using Docker driver with root privileges
	I1107 17:29:17.302430  305211 cni.go:95] Creating CNI manager for "calico"
	I1107 17:29:17.302464  305211 start_flags.go:312] Found "Calico" CNI - setting NetworkPlugin=cni
	I1107 17:29:17.302478  305211 start_flags.go:317] config:
	{Name:calico-171817 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-171817 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:29:17.304066  305211 out.go:177] * Starting control plane node calico-171817 in cluster calico-171817
	I1107 17:29:17.305521  305211 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1107 17:29:17.307057  305211 out.go:177] * Pulling base image ...
	I1107 17:29:17.308499  305211 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1107 17:29:17.308541  305211 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1107 17:29:17.308552  305211 cache.go:57] Caching tarball of preloaded images
	I1107 17:29:17.308604  305211 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 17:29:17.308789  305211 preload.go:174] Found /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 17:29:17.308809  305211 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I1107 17:29:17.308942  305211 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/config.json ...
	I1107 17:29:17.308974  305211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/config.json: {Name:mk1bf52f40ee6c52312335c653ba924d24caf3bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:29:17.334810  305211 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 17:29:17.334840  305211 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 17:29:17.334855  305211 cache.go:208] Successfully downloaded all kic artifacts
	I1107 17:29:17.334893  305211 start.go:364] acquiring machines lock for calico-171817: {Name:mkd25245fb1eaca2d1207346056746440f2c5c89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 17:29:17.335013  305211 start.go:368] acquired machines lock for "calico-171817" in 99.116µs
	I1107 17:29:17.335036  305211 start.go:93] Provisioning new machine with config: &{Name:calico-171817 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-171817 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1107 17:29:17.335125  305211 start.go:125] createHost starting for "" (driver="docker")
	I1107 17:29:17.337932  305211 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 17:29:17.338215  305211 start.go:159] libmachine.API.Create for "calico-171817" (driver="docker")
	I1107 17:29:17.338247  305211 client.go:168] LocalClient.Create starting
	I1107 17:29:17.338364  305211 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem
	I1107 17:29:17.338412  305211 main.go:134] libmachine: Decoding PEM data...
	I1107 17:29:17.338436  305211 main.go:134] libmachine: Parsing certificate...
	I1107 17:29:17.338505  305211 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem
	I1107 17:29:17.338526  305211 main.go:134] libmachine: Decoding PEM data...
	I1107 17:29:17.338804  305211 main.go:134] libmachine: Parsing certificate...
	I1107 17:29:17.339641  305211 cli_runner.go:164] Run: docker network inspect calico-171817 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 17:29:17.363582  305211 cli_runner.go:211] docker network inspect calico-171817 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 17:29:17.363652  305211 network_create.go:272] running [docker network inspect calico-171817] to gather additional debugging logs...
	I1107 17:29:17.363674  305211 cli_runner.go:164] Run: docker network inspect calico-171817
	W1107 17:29:17.389685  305211 cli_runner.go:211] docker network inspect calico-171817 returned with exit code 1
	I1107 17:29:17.389725  305211 network_create.go:275] error running [docker network inspect calico-171817]: docker network inspect calico-171817: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-171817
	I1107 17:29:17.389743  305211 network_create.go:277] output of [docker network inspect calico-171817]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-171817
	
	** /stderr **
	I1107 17:29:17.389807  305211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:29:17.421428  305211 network.go:246] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-c60ca185471f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d3:45:89:1f}}
	I1107 17:29:17.422305  305211 network.go:246] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-73d930ae71b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:e2:67:c1:53}}
	I1107 17:29:17.423460  305211 network.go:246] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-f8302ece2525 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:5e:a5:de:10}}
	I1107 17:29:17.424662  305211 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc00054af78] misses:0}
	I1107 17:29:17.424700  305211 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 17:29:17.424716  305211 network_create.go:115] attempt to create docker network calico-171817 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1107 17:29:17.424775  305211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-171817 calico-171817
	I1107 17:29:17.497742  305211 network_create.go:99] docker network calico-171817 192.168.76.0/24 created
	I1107 17:29:17.497775  305211 kic.go:106] calculated static IP "192.168.76.2" for the "calico-171817" container
	I1107 17:29:17.497836  305211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 17:29:17.533877  305211 cli_runner.go:164] Run: docker volume create calico-171817 --label name.minikube.sigs.k8s.io=calico-171817 --label created_by.minikube.sigs.k8s.io=true
	I1107 17:29:17.561095  305211 oci.go:103] Successfully created a docker volume calico-171817
	I1107 17:29:17.561166  305211 cli_runner.go:164] Run: docker run --rm --name calico-171817-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-171817 --entrypoint /usr/bin/test -v calico-171817:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1107 17:29:18.200226  305211 oci.go:107] Successfully prepared a docker volume calico-171817
	I1107 17:29:18.200270  305211 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1107 17:29:18.200293  305211 kic.go:179] Starting extracting preloaded images to volume ...
	I1107 17:29:18.200368  305211 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-171817:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 17:29:24.320133  305211 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-171817:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (6.119677544s)
	I1107 17:29:24.320169  305211 kic.go:188] duration metric: took 6.119873 seconds to extract preloaded images to volume
	W1107 17:29:24.320358  305211 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 17:29:24.320494  305211 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 17:29:24.419832  305211 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-171817 --name calico-171817 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-171817 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-171817 --network calico-171817 --ip 192.168.76.2 --volume calico-171817:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1107 17:29:24.793499  305211 cli_runner.go:164] Run: docker container inspect calico-171817 --format={{.State.Running}}
	I1107 17:29:24.819434  305211 cli_runner.go:164] Run: docker container inspect calico-171817 --format={{.State.Status}}
	I1107 17:29:24.842253  305211 cli_runner.go:164] Run: docker exec calico-171817 stat /var/lib/dpkg/alternatives/iptables
	I1107 17:29:24.889466  305211 oci.go:144] the created container "calico-171817" has a running status.
	I1107 17:29:24.889508  305211 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15310-44720/.minikube/machines/calico-171817/id_rsa...
	I1107 17:29:25.453875  305211 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15310-44720/.minikube/machines/calico-171817/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 17:29:25.519216  305211 cli_runner.go:164] Run: docker container inspect calico-171817 --format={{.State.Status}}
	I1107 17:29:25.543287  305211 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 17:29:25.543312  305211 kic_runner.go:114] Args: [docker exec --privileged calico-171817 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 17:29:25.604081  305211 cli_runner.go:164] Run: docker container inspect calico-171817 --format={{.State.Status}}
	I1107 17:29:25.627951  305211 machine.go:88] provisioning docker machine ...
	I1107 17:29:25.627990  305211 ubuntu.go:169] provisioning hostname "calico-171817"
	I1107 17:29:25.628051  305211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171817
	I1107 17:29:25.652457  305211 main.go:134] libmachine: Using SSH client type: native
	I1107 17:29:25.652695  305211 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1107 17:29:25.652719  305211 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-171817 && echo "calico-171817" | sudo tee /etc/hostname
	I1107 17:29:25.783097  305211 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-171817
	
	I1107 17:29:25.783168  305211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171817
	I1107 17:29:25.805035  305211 main.go:134] libmachine: Using SSH client type: native
	I1107 17:29:25.805202  305211 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1107 17:29:25.805231  305211 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-171817' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-171817/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-171817' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 17:29:25.918335  305211 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 17:29:25.918370  305211 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-44720/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-44720/.minikube}
	I1107 17:29:25.918392  305211 ubuntu.go:177] setting up certificates
	I1107 17:29:25.918402  305211 provision.go:83] configureAuth start
	I1107 17:29:25.918447  305211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-171817
	I1107 17:29:25.941245  305211 provision.go:138] copyHostCerts
	I1107 17:29:25.941313  305211 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem, removing ...
	I1107 17:29:25.941325  305211 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem
	I1107 17:29:25.941390  305211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem (1082 bytes)
	I1107 17:29:25.941458  305211 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem, removing ...
	I1107 17:29:25.941472  305211 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem
	I1107 17:29:25.941499  305211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem (1123 bytes)
	I1107 17:29:25.941545  305211 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem, removing ...
	I1107 17:29:25.941553  305211 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem
	I1107 17:29:25.941573  305211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem (1679 bytes)
	I1107 17:29:25.941612  305211 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem org=jenkins.calico-171817 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube calico-171817]
	I1107 17:29:26.189891  305211 provision.go:172] copyRemoteCerts
	I1107 17:29:26.189946  305211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 17:29:26.189995  305211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171817
	I1107 17:29:26.216397  305211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/calico-171817/id_rsa Username:docker}
	I1107 17:29:26.301698  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 17:29:26.320000  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1107 17:29:26.337605  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 17:29:26.354626  305211 provision.go:86] duration metric: configureAuth took 436.209507ms
	I1107 17:29:26.354657  305211 ubuntu.go:193] setting minikube options for container-runtime
	I1107 17:29:26.354817  305211 config.go:180] Loaded profile config "calico-171817": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 17:29:26.354832  305211 machine.go:91] provisioned docker machine in 726.859173ms
	I1107 17:29:26.354838  305211 client.go:171] LocalClient.Create took 9.016585559s
	I1107 17:29:26.354857  305211 start.go:167] duration metric: libmachine.API.Create for "calico-171817" took 9.016642845s
	I1107 17:29:26.354871  305211 start.go:300] post-start starting for "calico-171817" (driver="docker")
	I1107 17:29:26.354883  305211 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 17:29:26.354936  305211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 17:29:26.354983  305211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171817
	I1107 17:29:26.377262  305211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/calico-171817/id_rsa Username:docker}
	I1107 17:29:26.462487  305211 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 17:29:26.465326  305211 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 17:29:26.465359  305211 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 17:29:26.465379  305211 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 17:29:26.465388  305211 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 17:29:26.465401  305211 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-44720/.minikube/addons for local assets ...
	I1107 17:29:26.465449  305211 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-44720/.minikube/files for local assets ...
	I1107 17:29:26.465513  305211 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem -> 511762.pem in /etc/ssl/certs
	I1107 17:29:26.465594  305211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 17:29:26.472433  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem --> /etc/ssl/certs/511762.pem (1708 bytes)
	I1107 17:29:26.491678  305211 start.go:303] post-start completed in 136.790308ms
	I1107 17:29:26.491972  305211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-171817
	I1107 17:29:26.517427  305211 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/config.json ...
	I1107 17:29:26.517708  305211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 17:29:26.517764  305211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171817
	I1107 17:29:26.541461  305211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/calico-171817/id_rsa Username:docker}
	I1107 17:29:26.623219  305211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 17:29:26.627240  305211 start.go:128] duration metric: createHost completed in 9.292101487s
	I1107 17:29:26.627266  305211 start.go:83] releasing machines lock for "calico-171817", held for 9.292239664s
	I1107 17:29:26.627354  305211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-171817
	I1107 17:29:26.651297  305211 ssh_runner.go:195] Run: systemctl --version
	I1107 17:29:26.651346  305211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171817
	I1107 17:29:26.651410  305211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 17:29:26.651506  305211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171817
	I1107 17:29:26.675216  305211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/calico-171817/id_rsa Username:docker}
	I1107 17:29:26.676373  305211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/calico-171817/id_rsa Username:docker}
	I1107 17:29:26.762653  305211 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1107 17:29:26.791921  305211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 17:29:26.801402  305211 docker.go:189] disabling docker service ...
	I1107 17:29:26.801465  305211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 17:29:26.818710  305211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 17:29:26.828450  305211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 17:29:26.911326  305211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 17:29:26.996500  305211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 17:29:27.006136  305211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 17:29:27.019021  305211 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I1107 17:29:27.026761  305211 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1107 17:29:27.035453  305211 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1107 17:29:27.043194  305211 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
	I1107 17:29:27.050744  305211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 17:29:27.057172  305211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 17:29:27.063899  305211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:29:27.133927  305211 ssh_runner.go:195] Run: sudo systemctl restart containerd
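	For reference, the four sed commands above rewrite well-known keys in /etc/containerd/config.toml before the daemon restart. A minimal sketch of the same edits, run against a scratch copy with hypothetical starting values so nothing real is modified:

```shell
# Scratch copy standing in for /etc/containerd/config.toml (values are made up).
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
sandbox_image = "registry.k8s.io/pause:3.6"
restrict_oom_score_adj = true
SystemdCgroup = true
conf_dir = "/etc/cni/custom"
EOF
# Same whole-line substitutions the log performs, in place:
sed -i -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' "$cfg"
sed -i -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' "$cfg"
sed -i -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' "$cfg"
sed -i -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' "$cfg"
cat "$cfg"
```

	On the real node these edits are followed by `systemctl daemon-reload` and `systemctl restart containerd`, as the log shows.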
	I1107 17:29:27.201222  305211 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1107 17:29:27.201297  305211 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1107 17:29:27.204981  305211 start.go:472] Will wait 60s for crictl version
	I1107 17:29:27.205044  305211 ssh_runner.go:195] Run: sudo crictl version
	I1107 17:29:27.236754  305211 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1107 17:29:27.236828  305211 ssh_runner.go:195] Run: containerd --version
	I1107 17:29:27.265220  305211 ssh_runner.go:195] Run: containerd --version
	I1107 17:29:27.292486  305211 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	I1107 17:29:27.293915  305211 cli_runner.go:164] Run: docker network inspect calico-171817 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:29:27.316270  305211 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1107 17:29:27.319799  305211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
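	The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` command above is minikube's pattern for re-pinning a single /etc/hosts entry: drop any existing line for the name, append the fresh one, then copy the result back over the original. A sketch of the same pattern on a scratch file (the 10.0.0.1 address is a made-up replacement value):

```shell
# Scratch stand-in for /etc/hosts with an existing pinned entry.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n' > "$hosts"
# Drop the old host.minikube.internal line, append the new one, copy back.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '10.0.0.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

	Unrelated lines (here `localhost`) survive untouched, which is why the pattern is safer than a blind rewrite of the whole file.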
	I1107 17:29:27.329621  305211 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1107 17:29:27.329709  305211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 17:29:27.354024  305211 containerd.go:553] all images are preloaded for containerd runtime.
	I1107 17:29:27.354047  305211 containerd.go:467] Images already preloaded, skipping extraction
	I1107 17:29:27.354090  305211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 17:29:27.378682  305211 containerd.go:553] all images are preloaded for containerd runtime.
	I1107 17:29:27.378706  305211 cache_images.go:84] Images are preloaded, skipping loading
	I1107 17:29:27.378755  305211 ssh_runner.go:195] Run: sudo crictl info
	I1107 17:29:27.403928  305211 cni.go:95] Creating CNI manager for "calico"
	I1107 17:29:27.403964  305211 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 17:29:27.403987  305211 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-171817 NodeName:calico-171817 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 17:29:27.404136  305211 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-171817"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 17:29:27.404241  305211 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-171817 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:calico-171817 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I1107 17:29:27.404307  305211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 17:29:27.411869  305211 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 17:29:27.411941  305211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 17:29:27.419647  305211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (506 bytes)
	I1107 17:29:27.433198  305211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 17:29:27.447308  305211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2042 bytes)
	I1107 17:29:27.460230  305211 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1107 17:29:27.463238  305211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 17:29:27.472328  305211 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817 for IP: 192.168.76.2
	I1107 17:29:27.472437  305211 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-44720/.minikube/ca.key
	I1107 17:29:27.472478  305211 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.key
	I1107 17:29:27.472525  305211 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/client.key
	I1107 17:29:27.472540  305211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/client.crt with IP's: []
	I1107 17:29:27.520215  305211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/client.crt ...
	I1107 17:29:27.520242  305211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/client.crt: {Name:mkbcbaa00736ebdaa397dbfd3f86905d34608c72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:29:27.520427  305211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/client.key ...
	I1107 17:29:27.520442  305211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/client.key: {Name:mkcf988ef898dd408391ad970cc8703ba337d114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:29:27.520534  305211 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/apiserver.key.31bdca25
	I1107 17:29:27.520550  305211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 17:29:27.672705  305211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/apiserver.crt.31bdca25 ...
	I1107 17:29:27.672736  305211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/apiserver.crt.31bdca25: {Name:mk74dcdbdbb5114a662b626c2c7607aaa535769d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:29:27.672939  305211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/apiserver.key.31bdca25 ...
	I1107 17:29:27.672958  305211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/apiserver.key.31bdca25: {Name:mke57d9d44627c6324ca8326447fe147160e22a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:29:27.673056  305211 certs.go:320] copying /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/apiserver.crt
	I1107 17:29:27.673110  305211 certs.go:324] copying /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/apiserver.key
	I1107 17:29:27.673156  305211 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/proxy-client.key
	I1107 17:29:27.673169  305211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/proxy-client.crt with IP's: []
	I1107 17:29:27.802684  305211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/proxy-client.crt ...
	I1107 17:29:27.802713  305211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/proxy-client.crt: {Name:mkdb5b27752718b420f96200026003acb7ff7d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:29:27.802907  305211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/proxy-client.key ...
	I1107 17:29:27.802920  305211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/proxy-client.key: {Name:mk8b54440e03f8478b3e75e0b798e209e9df8424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:29:27.803096  305211 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176.pem (1338 bytes)
	W1107 17:29:27.803137  305211 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176_empty.pem, impossibly tiny 0 bytes
	I1107 17:29:27.803149  305211 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 17:29:27.803169  305211 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem (1082 bytes)
	I1107 17:29:27.803191  305211 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem (1123 bytes)
	I1107 17:29:27.803213  305211 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem (1679 bytes)
	I1107 17:29:27.803248  305211 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem (1708 bytes)
	I1107 17:29:27.803778  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 17:29:27.823063  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 17:29:27.842510  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 17:29:27.861491  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/calico-171817/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 17:29:27.879334  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 17:29:27.897979  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 17:29:27.915056  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 17:29:27.932704  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 17:29:27.950617  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176.pem --> /usr/share/ca-certificates/51176.pem (1338 bytes)
	I1107 17:29:27.967734  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem --> /usr/share/ca-certificates/511762.pem (1708 bytes)
	I1107 17:29:27.985452  305211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 17:29:28.002827  305211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 17:29:28.015261  305211 ssh_runner.go:195] Run: openssl version
	I1107 17:29:28.020372  305211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 17:29:28.027530  305211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:29:28.030481  305211 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:29:28.030519  305211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:29:28.035995  305211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 17:29:28.043163  305211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51176.pem && ln -fs /usr/share/ca-certificates/51176.pem /etc/ssl/certs/51176.pem"
	I1107 17:29:28.050307  305211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51176.pem
	I1107 17:29:28.053415  305211 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/51176.pem
	I1107 17:29:28.053468  305211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51176.pem
	I1107 17:29:28.058237  305211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/51176.pem /etc/ssl/certs/51391683.0"
	I1107 17:29:28.065169  305211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/511762.pem && ln -fs /usr/share/ca-certificates/511762.pem /etc/ssl/certs/511762.pem"
	I1107 17:29:28.072721  305211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/511762.pem
	I1107 17:29:28.075695  305211 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/511762.pem
	I1107 17:29:28.075751  305211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/511762.pem
	I1107 17:29:28.080430  305211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/511762.pem /etc/ssl/certs/3ec20f2e.0"
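	The `openssl x509 -hash` / `ln -fs .../<hash>.0` sequence above follows OpenSSL's CA-directory convention: tools locate a trusted cert by a symlink named after its subject-name hash. A self-contained sketch with a throwaway cert (the `demoCA` name and temp directory are hypothetical, not from the log):

```shell
# OpenSSL looks up CAs in a directory via <subject-hash>.0 symlinks.
certdir="$(mktemp -d)"
# Throwaway self-signed cert standing in for minikubeCA.pem:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$certdir/demo.key" -out "$certdir/demo.pem" -days 1 2>/dev/null
hash="$(openssl x509 -hash -noout -in "$certdir/demo.pem")"
# Same test-then-link guard the log uses for /etc/ssl/certs:
test -L "$certdir/$hash.0" || ln -fs "$certdir/demo.pem" "$certdir/$hash.0"
ls -l "$certdir/$hash.0"
```

	The `test -L || ln -fs` guard makes the step idempotent: re-running it against an already-provisioned node leaves the existing symlink alone.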
	I1107 17:29:28.088344  305211 kubeadm.go:396] StartCluster: {Name:calico-171817 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-171817 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:29:28.088419  305211 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1107 17:29:28.088452  305211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 17:29:28.111887  305211 cri.go:87] found id: ""
	I1107 17:29:28.111964  305211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 17:29:28.118910  305211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 17:29:28.125553  305211 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 17:29:28.125600  305211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:29:28.132413  305211 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 17:29:28.132460  305211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 17:29:28.175207  305211 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1107 17:29:28.175283  305211 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 17:29:28.206034  305211 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1107 17:29:28.206130  305211 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1107 17:29:28.206184  305211 kubeadm.go:317] OS: Linux
	I1107 17:29:28.206249  305211 kubeadm.go:317] CGROUPS_CPU: enabled
	I1107 17:29:28.206337  305211 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1107 17:29:28.206411  305211 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1107 17:29:28.206479  305211 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1107 17:29:28.206545  305211 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1107 17:29:28.206606  305211 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1107 17:29:28.206658  305211 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1107 17:29:28.206719  305211 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1107 17:29:28.206781  305211 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1107 17:29:28.283988  305211 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 17:29:28.284139  305211 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 17:29:28.284264  305211 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 17:29:28.402208  305211 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 17:29:28.405477  305211 out.go:204]   - Generating certificates and keys ...
	I1107 17:29:28.405600  305211 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 17:29:28.405699  305211 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 17:29:28.489005  305211 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 17:29:28.822362  305211 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1107 17:29:28.904484  305211 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1107 17:29:29.156505  305211 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1107 17:29:29.303526  305211 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1107 17:29:29.303724  305211 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-171817 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1107 17:29:29.599418  305211 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1107 17:29:29.599668  305211 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-171817 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1107 17:29:29.837610  305211 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 17:29:30.175026  305211 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 17:29:30.519333  305211 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1107 17:29:30.519516  305211 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 17:29:30.735787  305211 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 17:29:30.841915  305211 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 17:29:31.003012  305211 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 17:29:31.160966  305211 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 17:29:31.172544  305211 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 17:29:31.175015  305211 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 17:29:31.175112  305211 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1107 17:29:31.275676  305211 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 17:29:31.277349  305211 out.go:204]   - Booting up control plane ...
	I1107 17:29:31.277476  305211 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 17:29:31.278770  305211 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 17:29:31.279715  305211 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 17:29:31.280557  305211 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 17:29:31.282465  305211 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 17:29:37.785771  305211 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.503315 seconds
	I1107 17:29:37.785977  305211 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 17:29:37.795442  305211 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 17:29:38.311508  305211 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 17:29:38.311811  305211 kubeadm.go:317] [mark-control-plane] Marking the node calico-171817 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 17:29:38.822374  305211 kubeadm.go:317] [bootstrap-token] Using token: lgy62j.3jnjq76bnsws0dh9
	I1107 17:29:38.823944  305211 out.go:204]   - Configuring RBAC rules ...
	I1107 17:29:38.824140  305211 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 17:29:38.828306  305211 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 17:29:38.834199  305211 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 17:29:38.836532  305211 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 17:29:38.838795  305211 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 17:29:38.840816  305211 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 17:29:38.848632  305211 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 17:29:39.061719  305211 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1107 17:29:39.233288  305211 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1107 17:29:39.235671  305211 kubeadm.go:317] 
	I1107 17:29:39.235790  305211 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1107 17:29:39.235815  305211 kubeadm.go:317] 
	I1107 17:29:39.235916  305211 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1107 17:29:39.235951  305211 kubeadm.go:317] 
	I1107 17:29:39.236008  305211 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1107 17:29:39.236090  305211 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 17:29:39.236181  305211 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 17:29:39.236188  305211 kubeadm.go:317] 
	I1107 17:29:39.236256  305211 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1107 17:29:39.236264  305211 kubeadm.go:317] 
	I1107 17:29:39.236334  305211 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 17:29:39.236341  305211 kubeadm.go:317] 
	I1107 17:29:39.236394  305211 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1107 17:29:39.236473  305211 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 17:29:39.236567  305211 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 17:29:39.236589  305211 kubeadm.go:317] 
	I1107 17:29:39.236695  305211 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 17:29:39.236782  305211 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1107 17:29:39.236789  305211 kubeadm.go:317] 
	I1107 17:29:39.236880  305211 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token lgy62j.3jnjq76bnsws0dh9 \
	I1107 17:29:39.236982  305211 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:95565ebd46e18d5de21858e7c48a881aa75e06d4bd49a1404ef914eea82ee889 \
	I1107 17:29:39.237004  305211 kubeadm.go:317] 	--control-plane 
	I1107 17:29:39.237008  305211 kubeadm.go:317] 
	I1107 17:29:39.237094  305211 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1107 17:29:39.237100  305211 kubeadm.go:317] 
	I1107 17:29:39.237181  305211 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token lgy62j.3jnjq76bnsws0dh9 \
	I1107 17:29:39.237280  305211 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:95565ebd46e18d5de21858e7c48a881aa75e06d4bd49a1404ef914eea82ee889 
	I1107 17:29:39.240996  305211 kubeadm.go:317] W1107 17:29:28.167366     733 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1107 17:29:39.241279  305211 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1107 17:29:39.241474  305211 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 17:29:39.241497  305211 cni.go:95] Creating CNI manager for "calico"
	I1107 17:29:39.243541  305211 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1107 17:29:39.247381  305211 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1107 17:29:39.247407  305211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
	I1107 17:29:39.317735  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 17:29:40.734941  305211 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.417156666s)
	I1107 17:29:40.735000  305211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 17:29:40.735108  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262 minikube.k8s.io/name=calico-171817 minikube.k8s.io/updated_at=2022_11_07T17_29_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:40.735109  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:40.833840  305211 ops.go:34] apiserver oom_adj: -16
	I1107 17:29:40.833848  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:41.418767  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:41.918918  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:42.418194  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:42.918469  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:43.418830  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:43.919018  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:44.418886  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:44.919120  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:45.418994  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:45.918339  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:46.418246  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:46.919033  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:47.419147  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:47.918627  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:48.418221  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:48.918831  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:49.418986  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:49.919110  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:50.418471  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:50.918650  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:51.418927  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:51.918423  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:52.418874  305211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:29:52.813289  305211 kubeadm.go:1067] duration metric: took 12.078247672s to wait for elevateKubeSystemPrivileges.
	I1107 17:29:52.813329  305211 kubeadm.go:398] StartCluster complete in 24.72499193s
	I1107 17:29:52.813353  305211 settings.go:142] acquiring lock: {Name:mkf2fcb572bcccc1ea1245a5056c977e3fcf9575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:29:52.813495  305211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15310-44720/kubeconfig
	I1107 17:29:52.815225  305211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-44720/kubeconfig: {Name:mk626f4fda2bff4e217db2cf8a2887eea6970f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:29:53.334184  305211 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-171817" rescaled to 1
	I1107 17:29:53.334252  305211 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1107 17:29:53.336266  305211 out.go:177] * Verifying Kubernetes components...
	I1107 17:29:53.334302  305211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 17:29:53.334344  305211 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I1107 17:29:53.334548  305211 config.go:180] Loaded profile config "calico-171817": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 17:29:53.339887  305211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:29:53.339988  305211 addons.go:65] Setting storage-provisioner=true in profile "calico-171817"
	I1107 17:29:53.340004  305211 addons.go:65] Setting default-storageclass=true in profile "calico-171817"
	I1107 17:29:53.340024  305211 addons.go:227] Setting addon storage-provisioner=true in "calico-171817"
	I1107 17:29:53.340038  305211 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-171817"
	W1107 17:29:53.340040  305211 addons.go:236] addon storage-provisioner should already be in state true
	I1107 17:29:53.340194  305211 host.go:66] Checking if "calico-171817" exists ...
	I1107 17:29:53.340465  305211 cli_runner.go:164] Run: docker container inspect calico-171817 --format={{.State.Status}}
	I1107 17:29:53.340691  305211 cli_runner.go:164] Run: docker container inspect calico-171817 --format={{.State.Status}}
	I1107 17:29:53.372847  305211 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:29:53.374381  305211 addons.go:227] Setting addon default-storageclass=true in "calico-171817"
	W1107 17:29:53.375020  305211 addons.go:236] addon default-storageclass should already be in state true
	I1107 17:29:53.375051  305211 host.go:66] Checking if "calico-171817" exists ...
	I1107 17:29:53.375000  305211 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 17:29:53.375122  305211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 17:29:53.375174  305211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171817
	I1107 17:29:53.375377  305211 cli_runner.go:164] Run: docker container inspect calico-171817 --format={{.State.Status}}
	I1107 17:29:53.399876  305211 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 17:29:53.399903  305211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 17:29:53.399955  305211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171817
	I1107 17:29:53.400218  305211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/calico-171817/id_rsa Username:docker}
	I1107 17:29:53.442752  305211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/calico-171817/id_rsa Username:docker}
	I1107 17:29:53.458782  305211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 17:29:53.459807  305211 node_ready.go:35] waiting up to 5m0s for node "calico-171817" to be "Ready" ...
	I1107 17:29:53.462975  305211 node_ready.go:49] node "calico-171817" has status "Ready":"True"
	I1107 17:29:53.462997  305211 node_ready.go:38] duration metric: took 3.162516ms waiting for node "calico-171817" to be "Ready" ...
	I1107 17:29:53.463004  305211 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:29:53.471424  305211 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace to be "Ready" ...
	I1107 17:29:53.517861  305211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 17:29:53.622217  305211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 17:29:54.822898  305211 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.36407245s)
	I1107 17:29:54.822935  305211 start.go:826] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I1107 17:29:54.904720  305211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.282458325s)
	I1107 17:29:54.904780  305211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.386891283s)
	I1107 17:29:54.906684  305211 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1107 17:29:54.908027  305211 addons.go:488] enableAddons completed in 1.573713311s
	I1107 17:29:55.508759  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:29:57.521230  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:00.009560  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:02.010419  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:04.508138  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:06.509387  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:09.010338  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:11.508813  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:13.509618  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:16.008951  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:18.508758  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:20.509627  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:23.008348  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:25.509921  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:28.008842  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:30.009793  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:32.509254  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:35.008890  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:37.508780  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:40.010729  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:42.508678  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:44.509191  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:46.510360  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:49.008234  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:51.008589  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:53.009147  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:55.508668  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:30:58.008834  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:00.508817  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:03.010575  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:05.508259  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:07.508575  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:09.509291  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:12.008643  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:14.009166  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:16.009248  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:18.053607  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:20.111554  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:22.510090  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:25.008783  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:27.008829  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:29.509130  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:31.511304  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:34.014467  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:36.508736  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:39.008498  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:41.008846  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:43.508764  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:46.008576  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:48.009971  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:50.507851  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:52.508621  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:54.508866  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:57.009144  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:31:59.508441  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:01.508595  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:03.508803  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:05.509222  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:08.009082  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:10.009383  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:12.508078  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:14.508924  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:17.008602  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:19.508958  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:22.008360  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:24.009049  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:26.009111  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:28.508501  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:30.508622  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:32.508672  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:35.008968  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:37.508520  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:40.009569  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:42.009877  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:44.509769  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:47.009217  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:49.509015  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:51.509668  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:54.008936  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:56.508108  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:32:58.508381  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:00.508442  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:02.508604  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:04.508814  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:07.008724  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:09.010986  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:11.508793  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:14.008866  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:16.009911  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:18.509603  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:21.009885  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:23.508679  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:25.508797  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:28.008604  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:30.508565  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:33.008274  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:35.008426  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:37.009322  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:39.508367  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:41.508536  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:44.008482  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:46.508398  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:49.008526  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:51.008910  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:53.508389  305211 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:53.512712  305211 pod_ready.go:81] duration metric: took 4m0.041253974s waiting for pod "calico-kube-controllers-7df895d496-ftv44" in "kube-system" namespace to be "Ready" ...
	E1107 17:33:53.512735  305211 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1107 17:33:53.512744  305211 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-7kwnv" in "kube-system" namespace to be "Ready" ...
	I1107 17:33:55.523904  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:33:58.024023  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:00.024106  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:02.523535  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:05.024645  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:07.524028  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:10.024109  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:12.024530  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:14.524243  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:17.024116  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:19.024439  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:21.523425  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:23.523932  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:26.023474  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:28.024359  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:30.524202  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:33.024094  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:35.024686  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:37.523790  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:39.523899  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:42.024124  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:44.523984  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:47.023518  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:49.523934  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:51.524358  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:54.023366  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:56.024904  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:34:58.525297  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:01.023596  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:03.024226  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:05.523891  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:07.524824  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:10.024567  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:12.523302  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:15.026061  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:17.524032  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:19.524631  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:22.024145  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:24.524161  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:26.524284  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:29.024537  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:31.524486  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:33.524620  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:36.023629  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:38.024383  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:40.524295  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:43.025677  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:45.524253  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:48.024017  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:50.024726  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:52.523739  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:54.524426  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:56.524802  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:35:59.024676  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:01.523927  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:03.524265  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:06.023563  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:08.023922  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:10.024311  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:12.024733  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:14.523800  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:17.024270  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:19.523601  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:22.023854  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:24.024093  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:26.024515  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:28.524108  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:30.525661  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:33.024147  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:35.024334  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:37.024499  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:39.524539  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:42.023600  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:44.023736  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:46.024334  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:48.523764  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:51.023611  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:53.024430  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:55.523849  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:36:57.524130  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:00.024039  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:02.024136  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:04.524298  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:07.025005  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:09.525389  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:12.024078  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:14.024565  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:16.524259  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:19.023230  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:21.027346  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:23.523685  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:26.023869  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:28.024016  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:30.524261  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:33.024849  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:35.523066  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:37.523765  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:39.524215  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:42.024430  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:44.027018  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:46.523883  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:48.524067  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:50.524730  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:53.024427  305211 pod_ready.go:102] pod "calico-node-7kwnv" in "kube-system" namespace has status "Ready":"False"
	I1107 17:37:53.528600  305211 pod_ready.go:81] duration metric: took 4m0.015842263s waiting for pod "calico-node-7kwnv" in "kube-system" namespace to be "Ready" ...
	E1107 17:37:53.528624  305211 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1107 17:37:53.528636  305211 pod_ready.go:38] duration metric: took 8m0.065623347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:37:53.531045  305211 out.go:177] 
	W1107 17:37:53.532618  305211 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W1107 17:37:53.532640  305211 out.go:239] * 
	W1107 17:37:53.533464  305211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 17:37:53.534701  305211 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (516.71s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (359.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:32:04.641131   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 17:32:05.660830   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.139536125s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:32:25.856322   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130307085s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:32:46.622136   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129264217s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:32:54.187960   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.170221407s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1107 17:33:09.018347   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127609464s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:33:40.544686   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:33:40.550040   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:33:40.560287   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:33:40.580522   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:33:40.620802   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:33:40.701124   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:33:40.861385   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:33:41.181950   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:33:41.822804   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:33:43.103582   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140917109s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1107 17:33:45.664662   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:33:50.785674   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:34:01.026493   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:34:08.542966   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129653279s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:34:21.507621   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:34:22.807915   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129244618s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1107 17:34:41.323276   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:34:41.328574   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:34:41.338844   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:34:41.359087   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:34:41.399291   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:34:41.479600   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:34:41.639958   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:34:41.960564   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:34:42.601058   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:34:43.881921   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:34:46.442959   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:34:51.563670   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138552311s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:35:52.859500   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.120933656s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1107 17:36:05.858184   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130390233s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1107 17:37:04.640723   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.162634543s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (359.25s)

TestNetworkPlugins/group/enable-default-cni/DNS (351.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137347547s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:35:22.284672   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:35:25.176349   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137690055s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131237429s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1107 17:35:55.617321   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:35:55.622601   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:35:55.632852   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:35:55.653136   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:35:55.693429   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:35:55.774304   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:35:55.934888   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:35:56.255572   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:35:56.896438   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:35:58.176862   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:36:00.737420   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:36:03.245278   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128360505s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:36:16.098490   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:36:24.388886   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:36:24.700629   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.124196242s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1107 17:36:36.578978   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126933656s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1107 17:36:52.383466   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133899916s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1107 17:37:17.539535   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:37:25.165429   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133495141s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135858092s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1107 17:38:39.460887   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
E1107 17:38:40.545087   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125493387s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1107 17:39:08.229142   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
E1107 17:39:22.807735   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
E1107 17:39:41.322385   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131822127s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1107 17:40:09.006356   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:40:25.176091   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:40:55.616578   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/cilium-171817/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171815 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122640825s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (351.32s)

Test pass (249/277)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.58
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.25.3/json-events 6.4
11 TestDownloadOnly/v1.25.3/preload-exists 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.25
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
18 TestDownloadOnlyKic 2.96
19 TestBinaryMirror 0.81
20 TestOffline 77.11
22 TestAddons/Setup 130.9
24 TestAddons/parallel/Registry 16.86
25 TestAddons/parallel/Ingress 22.01
26 TestAddons/parallel/MetricsServer 5.52
27 TestAddons/parallel/HelmTiller 15.35
29 TestAddons/parallel/CSI 41.73
30 TestAddons/parallel/Headlamp 9.78
31 TestAddons/parallel/CloudSpanner 5.34
33 TestAddons/serial/GCPAuth 35.6
34 TestAddons/StoppedEnableDisable 20.25
35 TestCertOptions 40.24
36 TestCertExpiration 231.75
38 TestForceSystemdFlag 26.93
39 TestForceSystemdEnv 35.37
40 TestKVMDriverInstallOrUpdate 10.28
44 TestErrorSpam/setup 25.65
45 TestErrorSpam/start 0.96
46 TestErrorSpam/status 1.08
47 TestErrorSpam/pause 1.56
48 TestErrorSpam/unpause 1.61
49 TestErrorSpam/stop 1.49
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 44.25
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 15.44
56 TestFunctional/serial/KubeContext 0.05
57 TestFunctional/serial/KubectlGetPods 0.07
60 TestFunctional/serial/CacheCmd/cache/add_remote 3.13
61 TestFunctional/serial/CacheCmd/cache/add_local 1.92
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
63 TestFunctional/serial/CacheCmd/cache/list 0.07
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
66 TestFunctional/serial/CacheCmd/cache/delete 0.14
67 TestFunctional/serial/MinikubeKubectlCmd 0.13
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
69 TestFunctional/serial/ExtraConfig 38.16
70 TestFunctional/serial/ComponentHealth 0.06
71 TestFunctional/serial/LogsCmd 1.12
72 TestFunctional/serial/LogsFileCmd 1.16
74 TestFunctional/parallel/ConfigCmd 0.57
75 TestFunctional/parallel/DashboardCmd 13.37
76 TestFunctional/parallel/DryRun 0.57
77 TestFunctional/parallel/InternationalLanguage 0.24
78 TestFunctional/parallel/StatusCmd 1.12
81 TestFunctional/parallel/ServiceCmd 22.02
82 TestFunctional/parallel/ServiceCmdConnect 6.73
83 TestFunctional/parallel/AddonsCmd 0.23
84 TestFunctional/parallel/PersistentVolumeClaim 27.42
86 TestFunctional/parallel/SSHCmd 0.66
87 TestFunctional/parallel/CpCmd 1.65
88 TestFunctional/parallel/MySQL 22.83
89 TestFunctional/parallel/FileSync 0.39
90 TestFunctional/parallel/CertSync 2.39
94 TestFunctional/parallel/NodeLabels 0.07
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.84
98 TestFunctional/parallel/License 0.17
99 TestFunctional/parallel/Version/short 0.07
100 TestFunctional/parallel/Version/components 0.58
101 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
102 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
103 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
104 TestFunctional/parallel/ImageCommands/ImageListShort 0.39
105 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
106 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
107 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
108 TestFunctional/parallel/ImageCommands/ImageBuild 2.29
109 TestFunctional/parallel/ImageCommands/Setup 0.93
110 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.09
112 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 22.22
115 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.07
116 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.25
117 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.31
118 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
119 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.37
120 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.13
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/parallel/ProfileCmd/profile_list 0.47
129 TestFunctional/parallel/MountCmd/any-port 7.57
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
131 TestFunctional/parallel/MountCmd/specific-port 2.3
132 TestFunctional/delete_addon-resizer_images 0.08
133 TestFunctional/delete_my-image_image 0.02
134 TestFunctional/delete_minikube_cached_images 0.02
137 TestIngressAddonLegacy/StartLegacyK8sCluster 73.2
139 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.74
140 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.37
141 TestIngressAddonLegacy/serial/ValidateIngressAddons 32.93
144 TestJSONOutput/start/Command 46.04
145 TestJSONOutput/start/Audit 0
147 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/pause/Command 0.67
151 TestJSONOutput/pause/Audit 0
153 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/unpause/Command 0.61
157 TestJSONOutput/unpause/Audit 0
159 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/stop/Command 5.82
163 TestJSONOutput/stop/Audit 0
165 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
167 TestErrorJSONOutput 0.26
169 TestKicCustomNetwork/create_custom_network 37.84
170 TestKicCustomNetwork/use_default_bridge_network 28.07
171 TestKicExistingNetwork 27.71
172 TestKicCustomSubnet 27.76
173 TestMainNoArgs 0.07
174 TestMinikubeProfile 63.18
177 TestMountStart/serial/StartWithMountFirst 4.67
178 TestMountStart/serial/VerifyMountFirst 0.32
179 TestMountStart/serial/StartWithMountSecond 4.76
180 TestMountStart/serial/VerifyMountSecond 0.33
181 TestMountStart/serial/DeleteFirst 1.7
182 TestMountStart/serial/VerifyMountPostDelete 0.32
183 TestMountStart/serial/Stop 1.23
184 TestMountStart/serial/RestartStopped 6.29
185 TestMountStart/serial/VerifyMountPostStop 0.31
188 TestMultiNode/serial/FreshStart2Nodes 79.85
189 TestMultiNode/serial/DeployApp2Nodes 3.69
190 TestMultiNode/serial/PingHostFrom2Pods 0.89
191 TestMultiNode/serial/AddNode 28.38
192 TestMultiNode/serial/ProfileList 0.35
193 TestMultiNode/serial/CopyFile 11.42
194 TestMultiNode/serial/StopNode 2.33
195 TestMultiNode/serial/StartAfterStop 31.06
196 TestMultiNode/serial/RestartKeepsNodes 154.45
197 TestMultiNode/serial/DeleteNode 4.88
198 TestMultiNode/serial/StopMultiNode 40.03
199 TestMultiNode/serial/RestartMultiNode 106.85
200 TestMultiNode/serial/ValidateNameConflict 24.46
207 TestScheduledStopUnix 113.05
210 TestInsufficientStorage 15.22
211 TestRunningBinaryUpgrade 77.67
214 TestMissingContainerUpgrade 145.9
221 TestStoppedBinaryUpgrade/Setup 0.43
222 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
226 TestPause/serial/Start 59.32
227 TestNoKubernetes/serial/StartWithK8s 38.21
228 TestStoppedBinaryUpgrade/Upgrade 111.84
229 TestNoKubernetes/serial/StartWithStopK8s 16.44
230 TestNoKubernetes/serial/Start 3.9
231 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
232 TestNoKubernetes/serial/ProfileList 1.65
233 TestPause/serial/SecondStartNoReconfiguration 16.12
234 TestNoKubernetes/serial/Stop 1.27
235 TestNoKubernetes/serial/StartNoArgs 5.37
236 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
237 TestPause/serial/Pause 1.07
238 TestPause/serial/VerifyStatus 0.51
239 TestPause/serial/Unpause 0.78
240 TestPause/serial/PauseAgain 0.95
241 TestPause/serial/DeletePaused 6.52
242 TestPause/serial/VerifyDeletedResources 0.61
243 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
251 TestNetworkPlugins/group/false 0.61
256 TestStartStop/group/old-k8s-version/serial/FirstStart 123.5
258 TestStartStop/group/no-preload/serial/FirstStart 49.51
259 TestStartStop/group/no-preload/serial/DeployApp 8.32
260 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.62
261 TestStartStop/group/no-preload/serial/Stop 20.02
262 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
263 TestStartStop/group/no-preload/serial/SecondStart 311.95
264 TestStartStop/group/old-k8s-version/serial/DeployApp 7.38
265 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.59
266 TestStartStop/group/old-k8s-version/serial/Stop 20.04
267 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
268 TestStartStop/group/old-k8s-version/serial/SecondStart 420.8
270 TestStartStop/group/embed-certs/serial/FirstStart 44.66
271 TestStartStop/group/embed-certs/serial/DeployApp 7.31
272 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.73
273 TestStartStop/group/embed-certs/serial/Stop 20.05
274 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
275 TestStartStop/group/embed-certs/serial/SecondStart 313.34
276 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
277 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
278 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
279 TestStartStop/group/no-preload/serial/Pause 2.99
281 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.89
283 TestStartStop/group/newest-cni/serial/FirstStart 36.22
284 TestStartStop/group/newest-cni/serial/DeployApp 0
285 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.56
286 TestStartStop/group/newest-cni/serial/Stop 1.32
287 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
288 TestStartStop/group/newest-cni/serial/SecondStart 30.38
289 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
290 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.71
291 TestStartStop/group/default-k8s-diff-port/serial/Stop 24.04
292 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
293 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
294 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
295 TestStartStop/group/newest-cni/serial/Pause 3.01
296 TestNetworkPlugins/group/auto/Start 45.52
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
298 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 570.51
299 TestNetworkPlugins/group/auto/KubeletFlags 0.42
300 TestNetworkPlugins/group/auto/NetCatPod 10.3
301 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.05
302 TestNetworkPlugins/group/auto/DNS 0.14
303 TestNetworkPlugins/group/auto/Localhost 0.13
304 TestNetworkPlugins/group/auto/HairPin 0.13
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
306 TestNetworkPlugins/group/kindnet/Start 47.61
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.39
310 TestStartStop/group/old-k8s-version/serial/Pause 3.21
311 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.44
312 TestStartStop/group/embed-certs/serial/Pause 3.74
313 TestNetworkPlugins/group/cilium/Start 105.56
315 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
316 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
317 TestNetworkPlugins/group/kindnet/NetCatPod 9.2
318 TestNetworkPlugins/group/kindnet/DNS 0.16
319 TestNetworkPlugins/group/kindnet/Localhost 0.13
320 TestNetworkPlugins/group/kindnet/HairPin 0.15
321 TestNetworkPlugins/group/enable-default-cni/Start 296.5
322 TestNetworkPlugins/group/cilium/ControllerPod 5.02
323 TestNetworkPlugins/group/cilium/KubeletFlags 0.34
324 TestNetworkPlugins/group/cilium/NetCatPod 10.79
325 TestNetworkPlugins/group/cilium/DNS 0.13
326 TestNetworkPlugins/group/cilium/Localhost 0.12
327 TestNetworkPlugins/group/cilium/HairPin 0.12
328 TestNetworkPlugins/group/bridge/Start 36.24
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
330 TestNetworkPlugins/group/bridge/NetCatPod 8.24
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.21
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.03
TestDownloadOnly/v1.16.0/json-events (6.58s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-164525 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-164525 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.578220021s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.58s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-164525
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-164525: exit status 85 (87.559469ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-164525 | jenkins | v1.28.0 | 07 Nov 22 16:45 UTC |          |
	|         | -p download-only-164525        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 16:45:25
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 16:45:25.842108   51188 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:45:25.842339   51188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:45:25.842352   51188 out.go:309] Setting ErrFile to fd 2...
	I1107 16:45:25.842360   51188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:45:25.842469   51188 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
	W1107 16:45:25.842616   51188 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15310-44720/.minikube/config/config.json: open /home/jenkins/minikube-integration/15310-44720/.minikube/config/config.json: no such file or directory
	I1107 16:45:25.843331   51188 out.go:303] Setting JSON to true
	I1107 16:45:25.844159   51188 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8879,"bootTime":1667830647,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 16:45:25.844226   51188 start.go:126] virtualization: kvm guest
	I1107 16:45:25.847176   51188 out.go:97] [download-only-164525] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 16:45:25.847284   51188 notify.go:220] Checking for updates...
	W1107 16:45:25.847290   51188 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball: no such file or directory
	I1107 16:45:25.848895   51188 out.go:169] MINIKUBE_LOCATION=15310
	I1107 16:45:25.850503   51188 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:45:25.852202   51188 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	I1107 16:45:25.853822   51188 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	I1107 16:45:25.855532   51188 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1107 16:45:25.858225   51188 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 16:45:25.858612   51188 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 16:45:25.885440   51188 docker.go:137] docker version: linux-20.10.21
	I1107 16:45:25.885511   51188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:45:26.753159   51188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-11-07 16:45:25.904756664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:45:26.753275   51188 docker.go:254] overlay module found
	I1107 16:45:26.755372   51188 out.go:97] Using the docker driver based on user configuration
	I1107 16:45:26.755399   51188 start.go:282] selected driver: docker
	I1107 16:45:26.755412   51188 start.go:808] validating driver "docker" against <nil>
	I1107 16:45:26.755563   51188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:45:26.866987   51188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-11-07 16:45:26.773111525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:45:26.867110   51188 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 16:45:26.867585   51188 start_flags.go:384] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I1107 16:45:26.867689   51188 start_flags.go:883] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 16:45:26.870164   51188 out.go:169] Using Docker driver with root privileges
	I1107 16:45:26.871886   51188 cni.go:95] Creating CNI manager for ""
	I1107 16:45:26.871911   51188 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1107 16:45:26.871928   51188 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1107 16:45:26.871935   51188 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1107 16:45:26.871941   51188 start_flags.go:312] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 16:45:26.871950   51188 start_flags.go:317] config:
	{Name:download-only-164525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-164525 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISock
et: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 16:45:26.873727   51188 out.go:97] Starting control plane node download-only-164525 in cluster download-only-164525
	I1107 16:45:26.873755   51188 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1107 16:45:26.875192   51188 out.go:97] Pulling base image ...
	I1107 16:45:26.875220   51188 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1107 16:45:26.875338   51188 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 16:45:26.895105   51188 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1107 16:45:26.895544   51188 image.go:60] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1107 16:45:26.895657   51188 image.go:120] Writing gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1107 16:45:26.899093   51188 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1107 16:45:26.899123   51188 cache.go:57] Caching tarball of preloaded images
	I1107 16:45:26.899277   51188 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1107 16:45:26.901873   51188 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1107 16:45:26.901898   51188 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1107 16:45:26.937738   51188 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1107 16:45:30.262188   51188 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1107 16:45:30.262294   51188 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-164525"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.25.3/json-events (6.4s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-164525 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-164525 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.396837969s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (6.40s)

                                                
                                    
TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-164525
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-164525: exit status 85 (86.79806ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-164525 | jenkins | v1.28.0 | 07 Nov 22 16:45 UTC |          |
	|         | -p download-only-164525        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-164525 | jenkins | v1.28.0 | 07 Nov 22 16:45 UTC |          |
	|         | -p download-only-164525        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 16:45:32
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 16:45:32.508574   51349 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:45:32.508688   51349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:45:32.508693   51349 out.go:309] Setting ErrFile to fd 2...
	I1107 16:45:32.508697   51349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:45:32.508806   51349 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
	W1107 16:45:32.508919   51349 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15310-44720/.minikube/config/config.json: open /home/jenkins/minikube-integration/15310-44720/.minikube/config/config.json: no such file or directory
	I1107 16:45:32.509365   51349 out.go:303] Setting JSON to true
	I1107 16:45:32.510218   51349 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8886,"bootTime":1667830647,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 16:45:32.510277   51349 start.go:126] virtualization: kvm guest
	I1107 16:45:32.512698   51349 out.go:97] [download-only-164525] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 16:45:32.512818   51349 notify.go:220] Checking for updates...
	I1107 16:45:32.514538   51349 out.go:169] MINIKUBE_LOCATION=15310
	I1107 16:45:32.516439   51349 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:45:32.517987   51349 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	I1107 16:45:32.519658   51349 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	I1107 16:45:32.521131   51349 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1107 16:45:32.523955   51349 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 16:45:32.524374   51349 config.go:180] Loaded profile config "download-only-164525": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1107 16:45:32.524473   51349 start.go:716] api.Load failed for download-only-164525: filestore "download-only-164525": Docker machine "download-only-164525" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 16:45:32.524539   51349 driver.go:365] Setting default libvirt URI to qemu:///system
	W1107 16:45:32.524593   51349 start.go:716] api.Load failed for download-only-164525: filestore "download-only-164525": Docker machine "download-only-164525" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 16:45:32.548451   51349 docker.go:137] docker version: linux-20.10.21
	I1107 16:45:32.548554   51349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:45:32.642603   51349 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-11-07 16:45:32.567207777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:45:32.642701   51349 docker.go:254] overlay module found
	I1107 16:45:32.644778   51349 out.go:97] Using the docker driver based on existing profile
	I1107 16:45:32.644793   51349 start.go:282] selected driver: docker
	I1107 16:45:32.644804   51349 start.go:808] validating driver "docker" against &{Name:download-only-164525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-164525 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 16:45:32.644940   51349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:45:32.737017   51349 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-11-07 16:45:32.662369323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:45:32.737600   51349 cni.go:95] Creating CNI manager for ""
	I1107 16:45:32.737618   51349 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1107 16:45:32.737630   51349 start_flags.go:317] config:
	{Name:download-only-164525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-164525 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISock
et: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 16:45:32.739790   51349 out.go:97] Starting control plane node download-only-164525 in cluster download-only-164525
	I1107 16:45:32.739824   51349 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1107 16:45:32.741384   51349 out.go:97] Pulling base image ...
	I1107 16:45:32.741429   51349 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1107 16:45:32.741513   51349 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 16:45:32.761222   51349 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1107 16:45:32.761484   51349 image.go:60] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1107 16:45:32.761505   51349 image.go:63] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory, skipping pull
	I1107 16:45:32.761509   51349 image.go:104] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in cache, skipping pull
	I1107 16:45:32.761528   51349 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 as a tarball
	I1107 16:45:32.764905   51349 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1107 16:45:32.764932   51349 cache.go:57] Caching tarball of preloaded images
	I1107 16:45:32.765082   51349 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1107 16:45:32.767223   51349 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I1107 16:45:32.767248   51349 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	I1107 16:45:32.792760   51349 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:60f9fee056da17edf086af60afca6341 -> /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1107 16:45:37.046138   51349 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	I1107 16:45:37.046243   51349 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-164525"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.25s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-164525
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnlyKic (2.96s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-164539 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-164539 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (1.525468484s)
helpers_test.go:175: Cleaning up "download-docker-164539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-164539
--- PASS: TestDownloadOnlyKic (2.96s)

TestBinaryMirror (0.81s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-164542 --alsologtostderr --binary-mirror http://127.0.0.1:37613 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-164542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-164542
--- PASS: TestBinaryMirror (0.81s)

TestOffline (77.11s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-171544 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-171544 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m13.060620829s)
helpers_test.go:175: Cleaning up "offline-containerd-171544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-171544

=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-171544: (4.046006219s)
--- PASS: TestOffline (77.11s)

TestAddons/Setup (130.9s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-164543 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-164543 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m10.896859783s)
--- PASS: TestAddons/Setup (130.90s)

TestAddons/parallel/Registry (16.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: registry stabilized in 10.375168ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-l6xxz" [0fadd8ea-798e-4ac6-8c4d-25a7f815155f] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009096969s
addons_test.go:288: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-8mmlm" [079d3ed8-0d4a-42d5-ad27-425aab7ee69b] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:288: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008311103s
addons_test.go:293: (dbg) Run:  kubectl --context addons-164543 delete po -l run=registry-test --now
addons_test.go:298: (dbg) Run:  kubectl --context addons-164543 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:298: (dbg) Done: kubectl --context addons-164543 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.05909896s)
addons_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p addons-164543 ip

=== CONT  TestAddons/parallel/Registry
addons_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p addons-164543 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.86s)

TestAddons/parallel/Ingress (22.01s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Run:  kubectl --context addons-164543 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Run:  kubectl --context addons-164543 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context addons-164543 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [e4bb7454-0059-42d7-bfba-7b74baa98a8e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [e4bb7454-0059-42d7-bfba-7b74baa98a8e] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.108712696s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p addons-164543 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: (dbg) Run:  kubectl --context addons-164543 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p addons-164543 ip
addons_test.go:250: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p addons-164543 addons disable ingress-dns --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p addons-164543 addons disable ingress-dns --alsologtostderr -v=1: (1.16884886s)
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-164543 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p addons-164543 addons disable ingress --alsologtostderr -v=1: (7.521787419s)
--- PASS: TestAddons/parallel/Ingress (22.01s)

TestAddons/parallel/MetricsServer (5.52s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: metrics-server stabilized in 2.551712ms
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-769cd898cd-58bw2" [5f3e3a49-83a4-4c68-a20d-2baa1aadb78a] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008104523s
addons_test.go:368: (dbg) Run:  kubectl --context addons-164543 top pods -n kube-system
addons_test.go:385: (dbg) Run:  out/minikube-linux-amd64 -p addons-164543 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.52s)

TestAddons/parallel/HelmTiller (15.35s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: tiller-deploy stabilized in 10.321513ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-696b5bfbb7-rjmvt" [1fd26a9b-78ba-4886-9f58-31d008d6dedb] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008852487s
addons_test.go:426: (dbg) Run:  kubectl --context addons-164543 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:426: (dbg) Done: kubectl --context addons-164543 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.952247678s)
addons_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p addons-164543 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.35s)

TestAddons/parallel/CSI (41.73s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:514: csi-hostpath-driver pods stabilized in 13.583251ms
addons_test.go:517: (dbg) Run:  kubectl --context addons-164543 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:522: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-164543 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:527: (dbg) Run:  kubectl --context addons-164543 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:532: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [5f4fa0cf-8c7d-4936-81c0-64658111870b] Pending
helpers_test.go:342: "task-pv-pod" [5f4fa0cf-8c7d-4936-81c0-64658111870b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [5f4fa0cf-8c7d-4936-81c0-64658111870b] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:532: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.053219983s
addons_test.go:537: (dbg) Run:  kubectl --context addons-164543 create -f testdata/csi-hostpath-driver/snapshot.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:542: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-164543 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
2022/11/07 16:48:10 [DEBUG] GET http://192.168.49.2:5000

=== CONT  TestAddons/parallel/CSI
helpers_test.go:417: (dbg) Run:  kubectl --context addons-164543 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
addons_test.go:547: (dbg) Run:  kubectl --context addons-164543 delete pod task-pv-pod
addons_test.go:547: (dbg) Done: kubectl --context addons-164543 delete pod task-pv-pod: (1.967819973s)
addons_test.go:553: (dbg) Run:  kubectl --context addons-164543 delete pvc hpvc
addons_test.go:559: (dbg) Run:  kubectl --context addons-164543 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-164543 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-164543 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:574: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [25160ea6-73c2-40c8-8adf-40fccea7bbee] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [25160ea6-73c2-40c8-8adf-40fccea7bbee] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [25160ea6-73c2-40c8-8adf-40fccea7bbee] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:574: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.006687239s
addons_test.go:579: (dbg) Run:  kubectl --context addons-164543 delete pod task-pv-pod-restore
addons_test.go:579: (dbg) Done: kubectl --context addons-164543 delete pod task-pv-pod-restore: (1.15263606s)
addons_test.go:583: (dbg) Run:  kubectl --context addons-164543 delete pvc hpvc-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-164543 delete volumesnapshot new-snapshot-demo
addons_test.go:591: (dbg) Run:  out/minikube-linux-amd64 -p addons-164543 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:591: (dbg) Done: out/minikube-linux-amd64 -p addons-164543 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.849163254s)
addons_test.go:595: (dbg) Run:  out/minikube-linux-amd64 -p addons-164543 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.73s)

TestAddons/parallel/Headlamp (9.78s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-164543 --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-5f4cf474d8-7jkgb" [4a0c52a2-e04f-41ce-a462-09fcca9efd5a] Pending

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-7jkgb" [4a0c52a2-e04f-41ce-a462-09fcca9efd5a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-7jkgb" [4a0c52a2-e04f-41ce-a462-09fcca9efd5a] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-5f4cf474d8-7jkgb" [4a0c52a2-e04f-41ce-a462-09fcca9efd5a] Running

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.054984676s
--- PASS: TestAddons/parallel/Headlamp (9.78s)

TestAddons/parallel/CloudSpanner (5.34s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:342: "cloud-spanner-emulator-6c47ff8fb6-hd44j" [d6233f2c-2afa-43d4-93f1-d59be86d58d5] Running

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005603358s
addons_test.go:762: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-164543
--- PASS: TestAddons/parallel/CloudSpanner (5.34s)

TestAddons/serial/GCPAuth (35.6s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:606: (dbg) Run:  kubectl --context addons-164543 create -f testdata/busybox.yaml
addons_test.go:613: (dbg) Run:  kubectl --context addons-164543 create sa gcp-auth-test
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [f45d2f4a-e55d-4cfb-bdaa-1e6257edc71d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [f45d2f4a-e55d-4cfb-bdaa-1e6257edc71d] Running
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.006026375s
addons_test.go:625: (dbg) Run:  kubectl --context addons-164543 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:637: (dbg) Run:  kubectl --context addons-164543 describe sa gcp-auth-test
addons_test.go:675: (dbg) Run:  kubectl --context addons-164543 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:688: (dbg) Run:  out/minikube-linux-amd64 -p addons-164543 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:688: (dbg) Done: out/minikube-linux-amd64 -p addons-164543 addons disable gcp-auth --alsologtostderr -v=1: (6.115338544s)
addons_test.go:704: (dbg) Run:  out/minikube-linux-amd64 -p addons-164543 addons enable gcp-auth
addons_test.go:704: (dbg) Done: out/minikube-linux-amd64 -p addons-164543 addons enable gcp-auth: (2.141153541s)
addons_test.go:710: (dbg) Run:  kubectl --context addons-164543 apply -f testdata/private-image.yaml
addons_test.go:717: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-5c86c669bd-dpnmv" [c7916a27-81fb-4259-ab5a-7aecde95401b] Pending
helpers_test.go:342: "private-image-5c86c669bd-dpnmv" [c7916a27-81fb-4259-ab5a-7aecde95401b] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-5c86c669bd-dpnmv" [c7916a27-81fb-4259-ab5a-7aecde95401b] Running
addons_test.go:717: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 9.006602466s
addons_test.go:723: (dbg) Run:  kubectl --context addons-164543 apply -f testdata/private-image-eu.yaml
addons_test.go:728: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-64c96f687b-k6rvt" [2c88cc16-c1b3-4567-893e-a45dfddf58a7] Pending
helpers_test.go:342: "private-image-eu-64c96f687b-k6rvt" [2c88cc16-c1b3-4567-893e-a45dfddf58a7] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-64c96f687b-k6rvt" [2c88cc16-c1b3-4567-893e-a45dfddf58a7] Running
addons_test.go:728: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 9.006293638s
--- PASS: TestAddons/serial/GCPAuth (35.60s)

                                                
                                    
TestAddons/StoppedEnableDisable (20.25s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-164543
addons_test.go:135: (dbg) Done: out/minikube-linux-amd64 stop -p addons-164543: (20.046344414s)
addons_test.go:139: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-164543
addons_test.go:143: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-164543
--- PASS: TestAddons/StoppedEnableDisable (20.25s)

                                                
                                    
TestCertOptions (40.24s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-171828 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-171828 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.763535045s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-171828 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-171828 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-171828 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-171828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-171828
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-171828: (2.66806508s)
--- PASS: TestCertOptions (40.24s)

TestCertExpiration (231.75s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-171827 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-171827 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (34.76837783s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-171827 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
E1107 17:22:04.641201   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-171827 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (14.654856065s)
helpers_test.go:175: Cleaning up "cert-expiration-171827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-171827
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-171827: (2.321507319s)
--- PASS: TestCertExpiration (231.75s)

                                                
                                    
TestForceSystemdFlag (26.93s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-171908 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-171908 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (24.534195546s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-171908 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-171908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-171908
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-171908: (2.046866957s)
--- PASS: TestForceSystemdFlag (26.93s)

                                                
                                    
TestForceSystemdEnv (35.37s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-171740 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-171740 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (30.107000582s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-171740 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-171740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-171740
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-171740: (4.631996248s)
--- PASS: TestForceSystemdEnv (35.37s)

                                                
                                    
TestKVMDriverInstallOrUpdate (10.28s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (10.28s)

                                                
                                    
TestErrorSpam/setup (25.65s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-164937 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-164937 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-164937 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-164937 --driver=docker  --container-runtime=containerd: (25.649936168s)
--- PASS: TestErrorSpam/setup (25.65s)

                                                
                                    
TestErrorSpam/start (0.96s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 start --dry-run
--- PASS: TestErrorSpam/start (0.96s)

                                                
                                    
TestErrorSpam/status (1.08s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 status
--- PASS: TestErrorSpam/status (1.08s)

                                                
                                    
TestErrorSpam/pause (1.56s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
TestErrorSpam/unpause (1.61s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

                                                
                                    
TestErrorSpam/stop (1.49s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 stop: (1.241101855s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164937 --log_dir /tmp/nospam-164937 stop
--- PASS: TestErrorSpam/stop (1.49s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/test/nested/copy/51176/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (44.25s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165015 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-165015 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (44.252964693s)
--- PASS: TestFunctional/serial/StartWithProxy (44.25s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (15.44s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165015 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-165015 --alsologtostderr -v=8: (15.436263946s)
functional_test.go:656: soft start took 15.436949573s for "functional-165015" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.44s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-165015 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 cache add k8s.gcr.io/pause:3.3: (1.090592439s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 cache add k8s.gcr.io/pause:latest: (1.040355115s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.92s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-165015 /tmp/TestFunctionalserialCacheCmdcacheadd_local867229838/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 cache add minikube-local-cache-test:functional-165015
functional_test.go:1082: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 cache add minikube-local-cache-test:functional-165015: (1.671707087s)
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 cache delete minikube-local-cache-test:functional-165015
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-165015
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.92s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165015 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (334.63613ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 cache reload: (1.087828889s)
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 kubectl -- --context functional-165015 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-165015 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.16s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165015 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-165015 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.16439573s)
functional_test.go:754: restart took 38.164520077s for "functional-165015" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.16s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-165015 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.12s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 logs: (1.11605083s)
--- PASS: TestFunctional/serial/LogsCmd (1.12s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.16s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 logs --file /tmp/TestFunctionalserialLogsFileCmd1508359545/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 logs --file /tmp/TestFunctionalserialLogsFileCmd1508359545/001/logs.txt: (1.158387379s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.16s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.57s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165015 config get cpus: exit status 14 (91.254152ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165015 config get cpus: exit status 14 (84.557079ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.37s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-165015 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-165015 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 87977: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.37s)

                                                
                                    
TestFunctional/parallel/DryRun (0.57s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165015 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-165015 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (230.220814ms)
-- stdout --
	* [functional-165015] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1107 16:52:29.945272   87012 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:52:29.945391   87012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:52:29.945402   87012 out.go:309] Setting ErrFile to fd 2...
	I1107 16:52:29.945407   87012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:52:29.945514   87012 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
	I1107 16:52:29.946089   87012 out.go:303] Setting JSON to false
	I1107 16:52:29.947404   87012 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9303,"bootTime":1667830647,"procs":516,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 16:52:29.947486   87012 start.go:126] virtualization: kvm guest
	I1107 16:52:29.950174   87012 out.go:177] * [functional-165015] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 16:52:29.951844   87012 notify.go:220] Checking for updates...
	I1107 16:52:29.953380   87012 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 16:52:29.955143   87012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:52:29.956980   87012 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	I1107 16:52:29.958685   87012 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	I1107 16:52:29.960487   87012 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 16:52:29.962526   87012 config.go:180] Loaded profile config "functional-165015": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 16:52:29.962951   87012 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 16:52:29.991259   87012 docker.go:137] docker version: linux-20.10.21
	I1107 16:52:29.991382   87012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:52:30.090562   87012 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-07 16:52:30.012045294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:52:30.090700   87012 docker.go:254] overlay module found
	I1107 16:52:30.092979   87012 out.go:177] * Using the docker driver based on existing profile
	I1107 16:52:30.094474   87012 start.go:282] selected driver: docker
	I1107 16:52:30.094496   87012 start.go:808] validating driver "docker" against &{Name:functional-165015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-165015 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:f
alse registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 16:52:30.094624   87012 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 16:52:30.097118   87012 out.go:177] 
	W1107 16:52:30.098696   87012 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1107 16:52:30.100136   87012 out.go:177] 

** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165015 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.57s)

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165015 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-165015 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (242.47787ms)

-- stdout --
	* [functional-165015] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1107 16:52:30.517630   87422 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:52:30.517790   87422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:52:30.517806   87422 out.go:309] Setting ErrFile to fd 2...
	I1107 16:52:30.517815   87422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:52:30.517999   87422 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
	I1107 16:52:30.518642   87422 out.go:303] Setting JSON to false
	I1107 16:52:30.519850   87422 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9304,"bootTime":1667830647,"procs":514,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 16:52:30.519922   87422 start.go:126] virtualization: kvm guest
	I1107 16:52:30.522542   87422 out.go:177] * [functional-165015] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	I1107 16:52:30.524085   87422 notify.go:220] Checking for updates...
	I1107 16:52:30.524093   87422 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 16:52:30.525745   87422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:52:30.527456   87422 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	I1107 16:52:30.529126   87422 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	I1107 16:52:30.530686   87422 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 16:52:30.532660   87422 config.go:180] Loaded profile config "functional-165015": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 16:52:30.533273   87422 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 16:52:30.565581   87422 docker.go:137] docker version: linux-20.10.21
	I1107 16:52:30.565671   87422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:52:30.668262   87422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-07 16:52:30.585603463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:52:30.668381   87422 docker.go:254] overlay module found
	I1107 16:52:30.670609   87422 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1107 16:52:30.671898   87422 start.go:282] selected driver: docker
	I1107 16:52:30.671925   87422 start.go:808] validating driver "docker" against &{Name:functional-165015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-165015 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:f
alse registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 16:52:30.672078   87422 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 16:52:30.674734   87422 out.go:177] 
	W1107 16:52:30.676410   87422 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1107 16:52:30.678150   87422 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)
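Both dry-run failures above exit with `RSRC_INSUFFICIENT_REQ_MEMORY` because the requested 250MB is below minikube's usable minimum of 1800MB. A minimal sketch of that kind of pre-flight check in Go; the function name and error text here are illustrative, not minikube's actual implementation:

```go
package main

import "fmt"

// minUsableMemoryMB is the usable minimum cited in the error above.
const minUsableMemoryMB = 1800

// validateMemory models the pre-flight check that produced
// RSRC_INSUFFICIENT_REQ_MEMORY: any request below the minimum is
// rejected before a node is ever created.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	// --memory 250MB, as passed in the test above, fails the check.
	if err := validateMemory(250); err != nil {
		fmt.Println("X", err)
	}
	// The profile's configured 4000MB passes.
	if err := validateMemory(4000); err == nil {
		fmt.Println("4000MB accepted")
	}
}
```

The check runs during `--dry-run` as well, which is why both tests fail fast (exit status 23) without touching the existing cluster.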

TestFunctional/parallel/StatusCmd (1.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 status
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
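The `-f host:{{.Host}},kublet:{{.Kubelet}},...` flag above (the `kublet` spelling is preserved from the command as run) is a Go text/template rendered against the status object. A self-contained sketch of how such a format string expands; the `Status` struct here is a stand-in with matching field names, not minikube's actual type:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is a stand-in for the struct that a -f format string is
// rendered against; field names match the placeholders in the test.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// render parses the format string as a text/template and executes it
// against the given status, returning the expanded line.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	out, err := render("host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}", st)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
	// host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}
```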

TestFunctional/parallel/ServiceCmd (22.02s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-165015 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-165015 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-kdfmb" [eea301b0-4af5-4fa8-bce8-c18efac9871e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-kdfmb" [eea301b0-4af5-4fa8-bce8-c18efac9871e] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 20.007066715s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1476: found endpoint: https://192.168.49.2:31576
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 service hello-node --url --format={{.IP}}
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1511: found endpoint for hello-node: http://192.168.49.2:31576
--- PASS: TestFunctional/parallel/ServiceCmd (22.02s)

TestFunctional/parallel/ServiceCmdConnect (6.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-165015 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1565: (dbg) Run:  kubectl --context functional-165015 expose deployment hello-node-connect --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-55j56" [162d8efd-76eb-449a-9872-d2f56b37686f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-55j56" [162d8efd-76eb-449a-9872-d2f56b37686f] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.006792063s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 service hello-node-connect --url
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.49.2:32031
functional_test.go:1605: http://192.168.49.2:32031: success! body:

Hostname: hello-node-connect-6458c8fb6f-55j56

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32031
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.73s)

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (27.42s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [896db263-f411-405b-99b5-495d78971eab] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007838822s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-165015 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-165015 apply -f testdata/storage-provisioner/pvc.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-165015 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-165015 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [6d596d4c-fd75-4ccf-a283-0317843d35dd] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [6d596d4c-fd75-4ccf-a283-0317843d35dd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [6d596d4c-fd75-4ccf-a283-0317843d35dd] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.007857088s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-165015 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-165015 delete -f testdata/storage-provisioner/pod.yaml
2022/11/07 16:52:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-165015 delete -f testdata/storage-provisioner/pod.yaml: (1.521201012s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-165015 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [f377c3f1-cdac-43b3-895c-0ffef54c2388] Pending
helpers_test.go:342: "sp-pod" [f377c3f1-cdac-43b3-895c-0ffef54c2388] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [f377c3f1-cdac-43b3-895c-0ffef54c2388] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005680292s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-165015 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.42s)

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (1.65s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh -n functional-165015 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 cp functional-165015:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3884268299/001/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh -n functional-165015 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.65s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (22.83s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-165015 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-596b7fcdbf-gspbv" [c0b649e1-bc48-438d-a4ba-db8a1d2abe11] Pending
helpers_test.go:342: "mysql-596b7fcdbf-gspbv" [c0b649e1-bc48-438d-a4ba-db8a1d2abe11] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:342: "mysql-596b7fcdbf-gspbv" [c0b649e1-bc48-438d-a4ba-db8a1d2abe11] Running
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.062471682s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-165015 exec mysql-596b7fcdbf-gspbv -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-165015 exec mysql-596b7fcdbf-gspbv -- mysql -ppassword -e "show databases;": exit status 1 (232.412699ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-165015 exec mysql-596b7fcdbf-gspbv -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-165015 exec mysql-596b7fcdbf-gspbv -- mysql -ppassword -e "show databases;": exit status 1 (297.281401ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-165015 exec mysql-596b7fcdbf-gspbv -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-165015 exec mysql-596b7fcdbf-gspbv -- mysql -ppassword -e "show databases;": exit status 1 (179.039779ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-165015 exec mysql-596b7fcdbf-gspbv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.83s)

x
+
TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/51176/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "sudo cat /etc/test/nested/copy/51176/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

x
+
TestFunctional/parallel/CertSync (2.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51176.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "sudo cat /etc/ssl/certs/51176.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/51176.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "sudo cat /usr/share/ca-certificates/51176.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/511762.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "sudo cat /etc/ssl/certs/511762.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/511762.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "sudo cat /usr/share/ca-certificates/511762.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.39s)

x
+
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-165015 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

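The NodeLabels check drives a kubectl go-template that ranges over the first node's label map and prints each key. The same template construct can be exercised directly with Go's text/template, which kubectl's template flag is built on. The helper name and the label values below are illustrative, not read from the cluster:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderLabelKeys renders the same go-template construct the kubectl command
// above uses: range over a label map and emit each key followed by a space.
// text/template iterates string-keyed maps in sorted key order, so the
// output is deterministic.
func renderLabelKeys(labels map[string]string) string {
	t := template.Must(template.New("labels").Parse(`{{range $k, $v := .}}{{$k}} {{end}}`))
	var sb strings.Builder
	if err := t.Execute(&sb, labels); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	// Illustrative labels; a real node carries labels such as
	// kubernetes.io/hostname set by the kubelet.
	fmt.Println(renderLabelKeys(map[string]string{
		"kubernetes.io/hostname": "functional-165015",
		"kubernetes.io/os":       "linux",
	}))
}
```

Because map keys are emitted in sorted order, the test can match expected label names without worrying about iteration order.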
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "sudo systemctl is-active docker"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165015 ssh "sudo systemctl is-active docker": exit status 1 (430.404057ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "sudo systemctl is-active crio"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165015 ssh "sudo systemctl is-active crio": exit status 1 (413.922491ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)

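The non-zero exits above are the expected result: `systemctl is-active` exits with status 3 for an inactive unit (surfaced in stderr as `ssh: Process exited with status 3`, and as exit status 1 from the ssh wrapper), while printing `inactive` on stdout. Since containerd is the selected runtime, docker and crio must probe as inactive. A hedged Go sketch of that classification; the helper name is illustrative, not minikube's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// runtimeDisabled reports whether a `systemctl is-active <unit>` probe shows
// the runtime as not running: a non-zero exit status together with "inactive"
// (or "unknown") on stdout. systemd uses exit status 3 for inactive units.
func runtimeDisabled(exitStatus int, stdout string) bool {
	state := strings.TrimSpace(stdout)
	return exitStatus != 0 && (state == "inactive" || state == "unknown")
}

func main() {
	// The docker and crio probes in the log: status 3, stdout "inactive".
	fmt.Println(runtimeDisabled(3, "inactive\n"))
	// The active runtime (containerd) would return 0 with "active".
	fmt.Println(runtimeDisabled(0, "active\n"))
}
```

Checking both the exit status and the stdout text guards against a probe that fails for an unrelated reason (e.g. the ssh connection itself) being mistaken for a disabled runtime.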
x
+
TestFunctional/parallel/License (0.17s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

x
+
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

x
+
TestFunctional/parallel/Version/components (0.58s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165015 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-165015
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-165015
docker.io/kindest/kindnetd:v20221004-44d545d1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165015 image ls --format table:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20221004-44d545d1 | sha256:d6e3e2 | 25.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/echoserver                       | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/etcd                        | 3.5.4-0            | sha256:a8a176 | 102MB  |
| registry.k8s.io/kube-apiserver              | v1.25.3            | sha256:0346db | 34.2MB |
| docker.io/library/minikube-local-cache-test | functional-165015  | sha256:dc2e0a | 1.74kB |
| docker.io/library/nginx                     | alpine             | sha256:b99730 | 10.2MB |
| gcr.io/google-containers/addon-resizer      | functional-165015  | sha256:ffd4cf | 10.8MB |
| k8s.gcr.io/pause                            | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/kube-scheduler              | v1.25.3            | sha256:6d23ec | 15.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/pause                            | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/coredns/coredns             | v1.9.3             | sha256:5185b9 | 14.8MB |
| registry.k8s.io/kube-controller-manager     | v1.25.3            | sha256:603999 | 31.3MB |
| registry.k8s.io/kube-proxy                  | v1.25.3            | sha256:beaaf0 | 20.3MB |
| registry.k8s.io/pause                       | 3.8                | sha256:487387 | 311kB  |
| docker.io/library/mysql                     | 5.7                | sha256:eef0fa | 144MB  |
| k8s.gcr.io/pause                            | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165015 image ls --format json:
[{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":["registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"14837849"},{"id":"sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":["registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"102157811"},{"id":"sha256:dc2e0a77a64868b641280b4a4bd3d416caa5300c378f146fae4839b535e5c142","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-165015"],"size":"1737"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"31261869"},{"id":"sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"15798744"},{"id":"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":["registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"],"repoTags":["registry.k8s.io/pause:3.8"],"size":"311286"},{"id":"sha256:b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070","repoDigests":["docker.io/library/nginx@sha256:2452715dd322b3273419652b7721b64aa60305f606ef7a674ae28b6f12d155a3"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10243852"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-165015"],"size":"10823156"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":["registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"20265805"},{"id":"sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f","repoDigests":["docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"],"repoTags":["docker.io/kindest/kindnetd:v20221004-44d545d1"],"size":"25830582"},{"id":"sha256:eef0fab001e8dea739d538688b09e162bf54dd6c2bc04066bff99b5335cd6223","repoDigests":["docker.io/library/mysql@sha256:0e3435e72c493aec752d8274379b1eac4d634f47a7781a7a92b8636fa1dc94c1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"144296832"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"34238163"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

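`image ls --format json` emits one object per image with `id`, `repoDigests`, `repoTags`, and `size` (bytes as a decimal string). A small Go sketch that decodes that shape; the struct is inferred from the log output above, not taken from minikube's exported API, and the sample digests are shortened placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the JSON objects printed by `image ls --format json`.
// Field names are inferred from the output; sizes are decimal strings.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, as a string
}

// parseImages decodes a full `image ls --format json` payload.
func parseImages(data []byte) ([]image, error) {
	var imgs []image
	if err := json.Unmarshal(data, &imgs); err != nil {
		return nil, err
	}
	return imgs, nil
}

func main() {
	// A shortened sample in the same shape as the log output (digests elided).
	sample := []byte(`[{"id":"sha256:487387","repoDigests":["registry.k8s.io/pause@sha256:900118"],"repoTags":["registry.k8s.io/pause:3.8"],"size":"311286"}]`)
	imgs, err := parseImages(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(imgs[0].RepoTags[0], imgs[0].Size)
}
```

Images loaded without a registry digest (e.g. locally built ones) carry an empty `repoDigests` array, so consumers should not assume it is non-empty.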
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165015 image ls --format yaml:
- id: sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f
repoDigests:
- docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe
repoTags:
- docker.io/kindest/kindnetd:v20221004-44d545d1
size: "25830582"
- id: sha256:eef0fab001e8dea739d538688b09e162bf54dd6c2bc04066bff99b5335cd6223
repoDigests:
- docker.io/library/mysql@sha256:0e3435e72c493aec752d8274379b1eac4d634f47a7781a7a92b8636fa1dc94c1
repoTags:
- docker.io/library/mysql:5.7
size: "144296832"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "34238163"
- id: sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests:
- registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d
repoTags:
- registry.k8s.io/pause:3.8
size: "311286"
- id: sha256:dc2e0a77a64868b641280b4a4bd3d416caa5300c378f146fae4839b535e5c142
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-165015
size: "1737"
- id: sha256:b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070
repoDigests:
- docker.io/library/nginx@sha256:2452715dd322b3273419652b7721b64aa60305f606ef7a674ae28b6f12d155a3
repoTags:
- docker.io/library/nginx:alpine
size: "10243852"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-165015
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests:
- registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "102157811"
- id: sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "31261869"
- id: sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "15798744"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "14837849"
- id: sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "20265805"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165015 ssh pgrep buildkitd: exit status 1 (386.452064ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image build -t localhost/my-image:functional-165015 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 image build -t localhost/my-image:functional-165015 testdata/build: (1.633912751s)
functional_test.go:319: (dbg) Stderr: out/minikube-linux-amd64 -p functional-165015 image build -t localhost/my-image:functional-165015 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.4s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.1s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:04eb9915399043bb58cadbd7657959528ad94b3ce288b6694e7bdaf196b80f4e done
#8 exporting config sha256:caef6ffa7408252257a6abad8a346166443777e152bf0c0439bbfe53157b873f done
#8 naming to localhost/my-image:functional-165015 done
#8 DONE 0.2s
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.29s)

TestFunctional/parallel/ImageCommands/Setup (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-165015
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.93s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image load --daemon gcr.io/google-containers/addon-resizer:functional-165015

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 image load --daemon gcr.io/google-containers/addon-resizer:functional-165015: (4.81058449s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.09s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-165015 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-165015 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [486deea9-0de8-4db8-bf77-4286a42d719f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [486deea9-0de8-4db8-bf77-4286a42d719f] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 22.006616626s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image load --daemon gcr.io/google-containers/addon-resizer:functional-165015
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 image load --daemon gcr.io/google-containers/addon-resizer:functional-165015: (4.822039437s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.07s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-165015
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image load --daemon gcr.io/google-containers/addon-resizer:functional-165015

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 image load --daemon gcr.io/google-containers/addon-resizer:functional-165015: (5.208064767s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.25s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image save gcr.io/google-containers/addon-resizer:functional-165015 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 image save gcr.io/google-containers/addon-resizer:functional-165015 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.314815081s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.31s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image rm gcr.io/google-containers/addon-resizer:functional-165015
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.13449546s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.37s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-165015
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 image save --daemon gcr.io/google-containers/addon-resizer:functional-165015

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-165015 image save --daemon gcr.io/google-containers/addon-resizer:functional-165015: (1.076554959s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-165015
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-165015 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.101.69.247 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-165015 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1311: Took "394.76409ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: Took "75.209335ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/MountCmd/any-port (7.57s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-165015 /tmp/TestFunctionalparallelMountCmdany-port984068878/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1667839949056934886" to /tmp/TestFunctionalparallelMountCmdany-port984068878/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1667839949056934886" to /tmp/TestFunctionalparallelMountCmdany-port984068878/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1667839949056934886" to /tmp/TestFunctionalparallelMountCmdany-port984068878/001/test-1667839949056934886
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165015 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (379.563772ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  7 16:52 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  7 16:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  7 16:52 test-1667839949056934886
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh cat /mount-9p/test-1667839949056934886

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-165015 replace --force -f testdata/busybox-mount-test.yaml

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [caa363ea-1f83-45f0-be34-ee2d78bb1b1a] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [caa363ea-1f83-45f0-be34-ee2d78bb1b1a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [caa363ea-1f83-45f0-be34-ee2d78bb1b1a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [caa363ea-1f83-45f0-be34-ee2d78bb1b1a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007472246s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-165015 logs busybox-mount

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh stat /mount-9p/created-by-pod

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-165015 /tmp/TestFunctionalparallelMountCmdany-port984068878/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.57s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1362: Took "346.450953ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "83.119249ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/specific-port (2.3s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-165015 /tmp/TestFunctionalparallelMountCmdspecific-port1630471093/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165015 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (425.191249ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-165015 /tmp/TestFunctionalparallelMountCmdspecific-port1630471093/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-165015 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165015 ssh "sudo umount -f /mount-9p": exit status 1 (431.880532ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-165015 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-165015 /tmp/TestFunctionalparallelMountCmdspecific-port1630471093/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.30s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-165015
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-165015
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-165015
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (73.2s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-165256 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1107 16:52:56.749408   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
E1107 16:52:59.310553   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
E1107 16:53:04.430820   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
E1107 16:53:14.671803   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
E1107 16:53:35.152802   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-165256 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m13.197123718s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (73.20s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.74s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-165256 addons enable ingress --alsologtostderr -v=5
E1107 16:54:16.113842   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-165256 addons enable ingress --alsologtostderr -v=5: (12.738362268s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.74s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.37s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-165256 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.37s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (32.93s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:165: (dbg) Run:  kubectl --context ingress-addon-legacy-165256 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:165: (dbg) Done: kubectl --context ingress-addon-legacy-165256 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.559538582s)
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-165256 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context ingress-addon-legacy-165256 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [0170928a-05f3-4a1f-993d-57c5d500f7b2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [0170928a-05f3-4a1f-993d-57c5d500f7b2] Running
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.005905672s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-165256 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: (dbg) Run:  kubectl --context ingress-addon-legacy-165256 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-165256 ip
addons_test.go:250: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-165256 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-165256 addons disable ingress-dns --alsologtostderr -v=1: (3.883775298s)
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-165256 addons disable ingress --alsologtostderr -v=1
addons_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-165256 addons disable ingress --alsologtostderr -v=1: (7.27731547s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (32.93s)

TestJSONOutput/start/Command (46.04s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-165458 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1107 16:55:38.034368   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-165458 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (46.034793916s)
--- PASS: TestJSONOutput/start/Command (46.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-165458 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-165458 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.82s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-165458 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-165458 --output=json --user=testUser: (5.815073967s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-165556 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-165556 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.706532ms)
-- stdout --
	{"specversion":"1.0","id":"fa2165e0-7627-42ce-aa13-f2e2ebdace0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-165556] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e46fca6b-18f4-44fc-b420-36b1fc604ed8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15310"}}
	{"specversion":"1.0","id":"36e47ac2-e4f5-403f-a170-0aca78decd8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1bd6da6b-09ff-4d9c-94f7-42f59a8d4697","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig"}}
	{"specversion":"1.0","id":"9f479996-d9b2-4ccc-afb0-4615b30c8776","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube"}}
	{"specversion":"1.0","id":"ee2fb7d5-37f3-49e6-a9ae-07ca32f3dfbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"21ac4092-0042-4a0b-8cf0-dc113f853c13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-165556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-165556
--- PASS: TestErrorJSONOutput (0.26s)
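Each line in the stdout block above is a CloudEvent emitted by `minikube start --output=json`; the test asserts on the `io.k8s.sigs.minikube.error` event with exit code 56. A minimal sketch of pulling that information out of one such line (the event JSON is copied from the log above; this is an illustration, not part of the test harness):

```python
import json

# The io.k8s.sigs.minikube.error CloudEvent from the stdout block above.
line = ('{"specversion":"1.0","id":"21ac4092-0042-4a0b-8cf0-dc113f853c13",'
        '"source":"https://minikube.sigs.k8s.io/",'
        '"type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json",'
        '"data":{"advice":"","exitcode":"56","issues":"",'
        '"message":"The driver \'fail\' is not supported on linux/amd64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

event = json.loads(line)
if event["type"] == "io.k8s.sigs.minikube.error":
    # Note: exitcode is encoded as a JSON string, not a number.
    print(event["data"]["exitcode"], event["data"]["name"])
```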

TestKicCustomNetwork/create_custom_network (37.84s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-165556 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-165556 --network=: (35.682726385s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-165556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-165556
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-165556: (2.134582893s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.84s)

TestKicCustomNetwork/use_default_bridge_network (28.07s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-165634 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-165634 --network=bridge: (26.086162859s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-165634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-165634
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-165634: (1.957322579s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.07s)

TestKicExistingNetwork (27.71s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-165702 --network=existing-network
E1107 16:57:04.640750   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:04.646059   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:04.656401   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:04.676716   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:04.717085   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:04.797501   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:04.957906   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:05.278508   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:05.919549   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:07.199832   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:09.760057   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:14.881255   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:25.121801   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-165702 --network=existing-network: (25.596983631s)
helpers_test.go:175: Cleaning up "existing-network-165702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-165702
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-165702: (1.955470123s)
--- PASS: TestKicExistingNetwork (27.71s)

TestKicCustomSubnet (27.76s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-165730 --subnet=192.168.60.0/24
E1107 16:57:45.602720   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 16:57:54.187311   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-165730 --subnet=192.168.60.0/24: (25.56071456s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-165730 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-165730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-165730
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-165730: (2.177869447s)
--- PASS: TestKicCustomSubnet (27.76s)
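The subnet check above compares the `--subnet` flag against the network Docker actually created (`docker network inspect ... "{{(index .IPAM.Config 0).Subnet}}"`). A minimal sketch of that comparison with Python's stdlib `ipaddress` module; the `reported` value is an assumption, since the log does not print the inspect output:

```python
import ipaddress

requested = "192.168.60.0/24"   # value passed via --subnet in the run above
reported = "192.168.60.0/24"    # assumed output of `docker network inspect`

# The test passes when the created network matches the requested subnet.
net = ipaddress.ip_network(reported)
assert net == ipaddress.ip_network(requested)

# A container attached to this network gets an address inside the range;
# the first usable host address is:
first_host = next(net.hosts())
print(first_host)  # 192.168.60.1
```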

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (63.18s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-165758 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-165758 --driver=docker  --container-runtime=containerd: (23.437937788s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-165758 --driver=docker  --container-runtime=containerd
E1107 16:58:21.875212   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
E1107 16:58:26.563550   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-165758 --driver=docker  --container-runtime=containerd: (34.446941599s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-165758
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-165758
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-165758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-165758
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-165758: (1.921272243s)
helpers_test.go:175: Cleaning up "first-165758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-165758
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-165758: (2.160027048s)
--- PASS: TestMinikubeProfile (63.18s)

TestMountStart/serial/StartWithMountFirst (4.67s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-165901 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-165901 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.665498704s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.67s)

TestMountStart/serial/VerifyMountFirst (0.32s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-165901 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

TestMountStart/serial/StartWithMountSecond (4.76s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-165901 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-165901 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.758956831s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.76s)

TestMountStart/serial/VerifyMountSecond (0.33s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165901 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.33s)

TestMountStart/serial/DeleteFirst (1.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-165901 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-165901 --alsologtostderr -v=5: (1.699627719s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.32s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165901 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

TestMountStart/serial/Stop (1.23s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-165901
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-165901: (1.229721893s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (6.29s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-165901
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-165901: (5.28731187s)
--- PASS: TestMountStart/serial/RestartStopped (6.29s)

TestMountStart/serial/VerifyMountPostStop (0.31s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165901 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (79.85s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-165923 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1107 16:59:23.446424   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
E1107 16:59:24.087514   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
E1107 16:59:25.368348   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
E1107 16:59:27.929571   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
E1107 16:59:33.050600   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
E1107 16:59:43.291221   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
E1107 16:59:48.484334   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 17:00:03.771612   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-165923 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m19.317418451s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.85s)

TestMultiNode/serial/DeployApp2Nodes (3.69s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- rollout status deployment/busybox
E1107 17:00:44.731818   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-165923 -- rollout status deployment/busybox: (1.888465448s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- exec busybox-65db55d5d6-4nf7j -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- exec busybox-65db55d5d6-5l689 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- exec busybox-65db55d5d6-4nf7j -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- exec busybox-65db55d5d6-5l689 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- exec busybox-65db55d5d6-4nf7j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- exec busybox-65db55d5d6-5l689 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.69s)

TestMultiNode/serial/PingHostFrom2Pods (0.89s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- exec busybox-65db55d5d6-4nf7j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- exec busybox-65db55d5d6-4nf7j -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- exec busybox-65db55d5d6-5l689 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165923 -- exec busybox-65db55d5d6-5l689 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)

TestMultiNode/serial/AddNode (28.38s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-165923 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-165923 -v 3 --alsologtostderr: (27.691369313s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.38s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (11.42s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 cp testdata/cp-test.txt multinode-165923:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 cp multinode-165923:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1323681445/001/cp-test_multinode-165923.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 cp multinode-165923:/home/docker/cp-test.txt multinode-165923-m02:/home/docker/cp-test_multinode-165923_multinode-165923-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923-m02 "sudo cat /home/docker/cp-test_multinode-165923_multinode-165923-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 cp multinode-165923:/home/docker/cp-test.txt multinode-165923-m03:/home/docker/cp-test_multinode-165923_multinode-165923-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923-m03 "sudo cat /home/docker/cp-test_multinode-165923_multinode-165923-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 cp testdata/cp-test.txt multinode-165923-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 cp multinode-165923-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1323681445/001/cp-test_multinode-165923-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 cp multinode-165923-m02:/home/docker/cp-test.txt multinode-165923:/home/docker/cp-test_multinode-165923-m02_multinode-165923.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923 "sudo cat /home/docker/cp-test_multinode-165923-m02_multinode-165923.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 cp multinode-165923-m02:/home/docker/cp-test.txt multinode-165923-m03:/home/docker/cp-test_multinode-165923-m02_multinode-165923-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923-m03 "sudo cat /home/docker/cp-test_multinode-165923-m02_multinode-165923-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 cp testdata/cp-test.txt multinode-165923-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 cp multinode-165923-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1323681445/001/cp-test_multinode-165923-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 cp multinode-165923-m03:/home/docker/cp-test.txt multinode-165923:/home/docker/cp-test_multinode-165923-m03_multinode-165923.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923 "sudo cat /home/docker/cp-test_multinode-165923-m03_multinode-165923.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 cp multinode-165923-m03:/home/docker/cp-test.txt multinode-165923-m02:/home/docker/cp-test_multinode-165923-m03_multinode-165923-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 ssh -n multinode-165923-m02 "sudo cat /home/docker/cp-test_multinode-165923-m03_multinode-165923-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.42s)

TestMultiNode/serial/StopNode (2.33s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-165923 node stop m03: (1.24757231s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-165923 status: exit status 7 (542.790883ms)

-- stdout --
	multinode-165923
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-165923-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-165923-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-165923 status --alsologtostderr: exit status 7 (539.442155ms)

-- stdout --
	multinode-165923
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-165923-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-165923-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1107 17:01:29.671738  142813 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:01:29.671833  142813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:01:29.671841  142813 out.go:309] Setting ErrFile to fd 2...
	I1107 17:01:29.671846  142813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:01:29.671946  142813 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
	I1107 17:01:29.672104  142813 out.go:303] Setting JSON to false
	I1107 17:01:29.672140  142813 mustload.go:65] Loading cluster: multinode-165923
	I1107 17:01:29.672223  142813 notify.go:220] Checking for updates...
	I1107 17:01:29.672497  142813 config.go:180] Loaded profile config "multinode-165923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 17:01:29.672517  142813 status.go:255] checking status of multinode-165923 ...
	I1107 17:01:29.672979  142813 cli_runner.go:164] Run: docker container inspect multinode-165923 --format={{.State.Status}}
	I1107 17:01:29.696516  142813 status.go:330] multinode-165923 host status = "Running" (err=<nil>)
	I1107 17:01:29.696540  142813 host.go:66] Checking if "multinode-165923" exists ...
	I1107 17:01:29.696739  142813 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-165923
	I1107 17:01:29.717934  142813 host.go:66] Checking if "multinode-165923" exists ...
	I1107 17:01:29.718154  142813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 17:01:29.718193  142813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-165923
	I1107 17:01:29.739995  142813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49227 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/multinode-165923/id_rsa Username:docker}
	I1107 17:01:29.826723  142813 ssh_runner.go:195] Run: systemctl --version
	I1107 17:01:29.830106  142813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:01:29.838767  142813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:01:29.935100  142813 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-11-07 17:01:29.85959574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:01:29.935642  142813 kubeconfig.go:92] found "multinode-165923" server: "https://192.168.58.2:8443"
	I1107 17:01:29.935670  142813 api_server.go:165] Checking apiserver status ...
	I1107 17:01:29.935698  142813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:01:29.944680  142813 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	I1107 17:01:29.951824  142813 api_server.go:181] apiserver freezer: "8:freezer:/docker/85767b13eb47d7448f46a16b818c46aa0f932d634d967dd27337941a7dcf6c1d/kubepods/burstable/pod6db298e318cf64e52f66b6d8a59c824b/b2dac4dd9e3898c4985003c0eacfb6c66de68612cdcd18e27f53ba4b9963c624"
	I1107 17:01:29.951884  142813 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/85767b13eb47d7448f46a16b818c46aa0f932d634d967dd27337941a7dcf6c1d/kubepods/burstable/pod6db298e318cf64e52f66b6d8a59c824b/b2dac4dd9e3898c4985003c0eacfb6c66de68612cdcd18e27f53ba4b9963c624/freezer.state
	I1107 17:01:29.958037  142813 api_server.go:203] freezer state: "THAWED"
	I1107 17:01:29.958066  142813 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1107 17:01:29.962323  142813 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1107 17:01:29.962353  142813 status.go:421] multinode-165923 apiserver status = Running (err=<nil>)
	I1107 17:01:29.962365  142813 status.go:257] multinode-165923 status: &{Name:multinode-165923 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 17:01:29.962385  142813 status.go:255] checking status of multinode-165923-m02 ...
	I1107 17:01:29.962627  142813 cli_runner.go:164] Run: docker container inspect multinode-165923-m02 --format={{.State.Status}}
	I1107 17:01:29.984921  142813 status.go:330] multinode-165923-m02 host status = "Running" (err=<nil>)
	I1107 17:01:29.984946  142813 host.go:66] Checking if "multinode-165923-m02" exists ...
	I1107 17:01:29.985165  142813 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-165923-m02
	I1107 17:01:30.006892  142813 host.go:66] Checking if "multinode-165923-m02" exists ...
	I1107 17:01:30.007124  142813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 17:01:30.007172  142813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-165923-m02
	I1107 17:01:30.029220  142813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49232 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/multinode-165923-m02/id_rsa Username:docker}
	I1107 17:01:30.110816  142813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:01:30.119424  142813 status.go:257] multinode-165923-m02 status: &{Name:multinode-165923-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1107 17:01:30.119465  142813 status.go:255] checking status of multinode-165923-m03 ...
	I1107 17:01:30.119759  142813 cli_runner.go:164] Run: docker container inspect multinode-165923-m03 --format={{.State.Status}}
	I1107 17:01:30.142158  142813 status.go:330] multinode-165923-m03 host status = "Stopped" (err=<nil>)
	I1107 17:01:30.142187  142813 status.go:343] host is not running, skipping remaining checks
	I1107 17:01:30.142195  142813 status.go:257] multinode-165923-m03 status: &{Name:multinode-165923-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)

TestMultiNode/serial/StartAfterStop (31.06s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-165923 node start m03 --alsologtostderr: (30.279573777s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.06s)

TestMultiNode/serial/RestartKeepsNodes (154.45s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-165923
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-165923
E1107 17:02:04.640636   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 17:02:06.652914   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
E1107 17:02:32.326457   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-165923: (40.96421112s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-165923 --wait=true -v=8 --alsologtostderr
E1107 17:02:54.187471   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
E1107 17:04:22.808728   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-165923 --wait=true -v=8 --alsologtostderr: (1m53.342771686s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-165923
--- PASS: TestMultiNode/serial/RestartKeepsNodes (154.45s)

TestMultiNode/serial/DeleteNode (4.88s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-165923 node delete m03: (4.225692687s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.88s)

TestMultiNode/serial/StopMultiNode (40.03s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 stop
E1107 17:04:50.495553   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-165923 stop: (39.798411876s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-165923 status: exit status 7 (111.672648ms)

-- stdout --
	multinode-165923
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-165923-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-165923 status --alsologtostderr: exit status 7 (117.524259ms)

-- stdout --
	multinode-165923
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-165923-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1107 17:05:20.501391  153736 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:05:20.501500  153736 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:05:20.501504  153736 out.go:309] Setting ErrFile to fd 2...
	I1107 17:05:20.501509  153736 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:05:20.501621  153736 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
	I1107 17:05:20.501804  153736 out.go:303] Setting JSON to false
	I1107 17:05:20.501842  153736 mustload.go:65] Loading cluster: multinode-165923
	I1107 17:05:20.502173  153736 notify.go:220] Checking for updates...
	I1107 17:05:20.503210  153736 config.go:180] Loaded profile config "multinode-165923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 17:05:20.503281  153736 status.go:255] checking status of multinode-165923 ...
	I1107 17:05:20.504042  153736 cli_runner.go:164] Run: docker container inspect multinode-165923 --format={{.State.Status}}
	I1107 17:05:20.532480  153736 status.go:330] multinode-165923 host status = "Stopped" (err=<nil>)
	I1107 17:05:20.532506  153736 status.go:343] host is not running, skipping remaining checks
	I1107 17:05:20.532513  153736 status.go:257] multinode-165923 status: &{Name:multinode-165923 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 17:05:20.532580  153736 status.go:255] checking status of multinode-165923-m02 ...
	I1107 17:05:20.532873  153736 cli_runner.go:164] Run: docker container inspect multinode-165923-m02 --format={{.State.Status}}
	I1107 17:05:20.554561  153736 status.go:330] multinode-165923-m02 host status = "Stopped" (err=<nil>)
	I1107 17:05:20.554592  153736 status.go:343] host is not running, skipping remaining checks
	I1107 17:05:20.554599  153736 status.go:257] multinode-165923-m02 status: &{Name:multinode-165923-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.03s)

TestMultiNode/serial/RestartMultiNode (106.85s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-165923 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1107 17:07:04.640708   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-165923 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m46.16771242s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165923 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (106.85s)

TestMultiNode/serial/ValidateNameConflict (24.46s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-165923
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-165923-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-165923-m02 --driver=docker  --container-runtime=containerd: exit status 14 (90.824338ms)

-- stdout --
	* [multinode-165923-m02] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-165923-m02' is duplicated with machine name 'multinode-165923-m02' in profile 'multinode-165923'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-165923-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-165923-m03 --driver=docker  --container-runtime=containerd: (22.028266974s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-165923
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-165923: exit status 80 (342.921927ms)

-- stdout --
	* Adding node m03 to cluster multinode-165923
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-165923-m03 already exists in multinode-165923-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-165923-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-165923-m03: (1.929673976s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.46s)

TestScheduledStopUnix (113.05s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-171336 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-171336 --memory=2048 --driver=docker  --container-runtime=containerd: (36.413119085s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-171336 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-171336 -n scheduled-stop-171336
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-171336 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-171336 --cancel-scheduled
E1107 17:14:22.808041   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-171336 -n scheduled-stop-171336
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-171336
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-171336 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-171336
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-171336: exit status 7 (96.847417ms)

-- stdout --
	scheduled-stop-171336
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-171336 -n scheduled-stop-171336
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-171336 -n scheduled-stop-171336: exit status 7 (91.457936ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-171336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-171336
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-171336: (4.934375609s)
--- PASS: TestScheduledStopUnix (113.05s)

TestInsufficientStorage (15.22s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-171529 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-171529 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.682087026s)

-- stdout --
	{"specversion":"1.0","id":"51db28eb-4e3d-4ded-9b9a-b5b13f4a3ead","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-171529] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd307efb-3f34-4b5c-868b-82235b93c527","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15310"}}
	{"specversion":"1.0","id":"82bc1752-9697-43de-8123-4074e21a7575","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a00ed2b3-1f1a-4aff-84a0-76289434209f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig"}}
	{"specversion":"1.0","id":"80b2459d-3697-4685-8256-a1af071a7907","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube"}}
	{"specversion":"1.0","id":"bd885311-ddd9-4f38-93d0-b71c4cf674c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b175e68b-e86b-42a7-8889-b4d40a890e7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"639712c4-d0ac-41c7-a2e3-59a1db285e3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0069ca4f-272c-4c54-b8b1-217c25cde8ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"774fd9bb-8645-46a9-a643-1e1632990728","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4486157e-6718-4bf5-911f-e0670ca8f584","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-171529 in cluster insufficient-storage-171529","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"abfa22a2-e09d-4964-aef8-f6c675c49ee7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a5061a9-f64c-4178-8677-19c8b57d1289","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d125f40d-9b86-42e4-b75e-9193fb28826a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
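(Editor's note, not part of the test log: with `--output=json`, `minikube start` emits one CloudEvents-style JSON object per line, as in the stdout block above. A minimal sketch of how the exit-26 failure can be picked out of that stream; the two event literals below are trimmed copies of lines from the log, not a complete replay.)

```python
import json

# Each line of `minikube start --output=json` is a CloudEvents-style JSON
# object. The literals below are trimmed from the log above; filtering on the
# event "type" suffix surfaces the RSRC_DOCKER_STORAGE error behind exit 26.
events = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info",'
    '"data":{"message":"MINIKUBE_LOCATION=15310"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}',
]
errors = [e for e in map(json.loads, events) if e["type"].endswith(".error")]
for e in errors:
    print(e["data"]["name"], e["data"]["exitcode"])  # RSRC_DOCKER_STORAGE 26
```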
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-171529 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-171529 --output=json --layout=cluster: exit status 7 (324.665117ms)

-- stdout --
	{"Name":"insufficient-storage-171529","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-171529","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1107 17:15:38.114176  176844 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-171529" does not appear in /home/jenkins/minikube-integration/15310-44720/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-171529 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-171529 --output=json --layout=cluster: exit status 7 (328.91561ms)

-- stdout --
	{"Name":"insufficient-storage-171529","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-171529","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1107 17:15:38.444483  176953 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-171529" does not appear in /home/jenkins/minikube-integration/15310-44720/kubeconfig
	E1107 17:15:38.452611  176953 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/insufficient-storage-171529/events.json: no such file or directory

** /stderr **
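(Editor's note: the `status --output=json --layout=cluster` payload above is machine-readable; a small sketch re-parsing a trimmed copy of it. The command exits 7, while the JSON itself carries the 507 InsufficientStorage status.)

```python
import json

# Trimmed copy of the `minikube status --output=json --layout=cluster`
# payload captured in the log above.
payload = json.loads(
    '{"Name":"insufficient-storage-171529","StatusCode":507,'
    '"StatusName":"InsufficientStorage",'
    '"StatusDetail":"/var is almost out of disk space",'
    '"Nodes":[{"Name":"insufficient-storage-171529","StatusCode":507,'
    '"StatusName":"InsufficientStorage"}]}'
)
print(payload["StatusName"], "-", payload["StatusDetail"])
```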
helpers_test.go:175: Cleaning up "insufficient-storage-171529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-171529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-171529: (5.885522613s)
--- PASS: TestInsufficientStorage (15.22s)

TestRunningBinaryUpgrade (77.67s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.832090218.exe start -p running-upgrade-171710 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.832090218.exe start -p running-upgrade-171710 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (34.91389096s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-171710 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-171710 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.177576481s)
helpers_test.go:175: Cleaning up "running-upgrade-171710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-171710

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-171710: (4.143669664s)
--- PASS: TestRunningBinaryUpgrade (77.67s)

TestMissingContainerUpgrade (145.9s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.2470541900.exe start -p missing-upgrade-171655 --memory=2200 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.2470541900.exe start -p missing-upgrade-171655 --memory=2200 --driver=docker  --container-runtime=containerd: (1m29.259397291s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-171655

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-171655: (12.324126564s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-171655
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-171655 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-171655 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.596248399s)
helpers_test.go:175: Cleaning up "missing-upgrade-171655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-171655
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-171655: (2.242004019s)
--- PASS: TestMissingContainerUpgrade (145.90s)

TestStoppedBinaryUpgrade/Setup (0.43s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-171544 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-171544 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (119.017776ms)

-- stdout --
	* [NoKubernetes-171544] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

TestPause/serial/Start (59.32s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-171544 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-171544 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (59.314965201s)
--- PASS: TestPause/serial/Start (59.32s)

TestNoKubernetes/serial/StartWithK8s (38.21s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-171544 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-171544 --driver=docker  --container-runtime=containerd: (37.767785359s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-171544 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.21s)

TestStoppedBinaryUpgrade/Upgrade (111.84s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.844098949.exe start -p stopped-upgrade-171544 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1107 17:15:45.855774   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.844098949.exe start -p stopped-upgrade-171544 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.457633314s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.844098949.exe -p stopped-upgrade-171544 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.844098949.exe -p stopped-upgrade-171544 stop: (1.256513232s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-171544 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-171544 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.129318521s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.84s)

TestNoKubernetes/serial/StartWithStopK8s (16.44s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-171544 --no-kubernetes --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-171544 --no-kubernetes --driver=docker  --container-runtime=containerd: (13.752785865s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-171544 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-171544 status -o json: exit status 2 (372.888813ms)

-- stdout --
	{"Name":"NoKubernetes-171544","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-171544
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-171544: (2.312763495s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.44s)

TestNoKubernetes/serial/Start (3.9s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-171544 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-171544 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.903444748s)
--- PASS: TestNoKubernetes/serial/Start (3.90s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-171544 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-171544 "sudo systemctl is-active --quiet service kubelet": exit status 1 (340.840958ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

TestNoKubernetes/serial/ProfileList (1.65s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.65s)

TestPause/serial/SecondStartNoReconfiguration (16.12s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-171544 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-171544 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.102692703s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (16.12s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-171544
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-171544: (1.26684333s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (5.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-171544 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-171544 --driver=docker  --container-runtime=containerd: (5.37465074s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.37s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-171544 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-171544 "sudo systemctl is-active --quiet service kubelet": exit status 1 (320.664764ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestPause/serial/Pause (1.07s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-171544 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-171544 --alsologtostderr -v=5: (1.064890651s)
--- PASS: TestPause/serial/Pause (1.07s)

TestPause/serial/VerifyStatus (0.51s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-171544 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-171544 --output=json --layout=cluster: exit status 2 (510.197946ms)

-- stdout --
	{"Name":"pause-171544","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-171544","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.51s)
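(Editor's note: the `StatusCode` values in the cluster-layout JSON above mirror HTTP status codes: 200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage. A sketch decoding the per-component states from a trimmed copy of the payload.)

```python
import json

# Trimmed copy of the `minikube status --output=json --layout=cluster`
# payload for the paused profile captured in the log above.
payload = json.loads(
    '{"Name":"pause-171544","StatusCode":418,"StatusName":"Paused",'
    '"Nodes":[{"Name":"pause-171544","StatusCode":200,"Components":'
    '{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)
for name, component in payload["Nodes"][0]["Components"].items():
    print(name, "->", component["StatusName"])  # apiserver -> Paused, kubelet -> Stopped
```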

TestPause/serial/Unpause (0.78s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-171544 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)

TestPause/serial/PauseAgain (0.95s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-171544 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.95s)

TestPause/serial/DeletePaused (6.52s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-171544 --alsologtostderr -v=5
E1107 17:17:04.640929   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-171544 --alsologtostderr -v=5: (6.516431465s)
--- PASS: TestPause/serial/DeletePaused (6.52s)

TestPause/serial/VerifyDeletedResources (0.61s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-171544
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-171544: exit status 1 (31.45811ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-171544

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.61s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-171544
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

TestNetworkPlugins/group/false (0.61s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-171816 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-171816 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (327.01402ms)

-- stdout --
	* [false-171816] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1107 17:18:16.288070  215360 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:18:16.288216  215360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:18:16.288231  215360 out.go:309] Setting ErrFile to fd 2...
	I1107 17:18:16.288240  215360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:18:16.288400  215360 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
	I1107 17:18:16.289178  215360 out.go:303] Setting JSON to false
	I1107 17:18:16.291434  215360 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10849,"bootTime":1667830647,"procs":830,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 17:18:16.291530  215360 start.go:126] virtualization: kvm guest
	I1107 17:18:16.294271  215360 out.go:177] * [false-171816] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 17:18:16.295858  215360 notify.go:220] Checking for updates...
	I1107 17:18:16.295879  215360 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:18:16.297593  215360 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:18:16.299066  215360 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
	I1107 17:18:16.300601  215360 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
	I1107 17:18:16.302034  215360 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 17:18:16.304121  215360 config.go:180] Loaded profile config "kubernetes-upgrade-171701": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1107 17:18:16.304275  215360 config.go:180] Loaded profile config "missing-upgrade-171655": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.0
	I1107 17:18:16.304426  215360 config.go:180] Loaded profile config "running-upgrade-171710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1107 17:18:16.304491  215360 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:18:16.342770  215360 docker.go:137] docker version: linux-20.10.21
	I1107 17:18:16.342890  215360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:18:16.499819  215360 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:94 SystemTime:2022-11-07 17:18:16.375241594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:18:16.499969  215360 docker.go:254] overlay module found
	I1107 17:18:16.501770  215360 out.go:177] * Using the docker driver based on user configuration
	I1107 17:18:16.503249  215360 start.go:282] selected driver: docker
	I1107 17:18:16.503281  215360 start.go:808] validating driver "docker" against <nil>
	I1107 17:18:16.503312  215360 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:18:16.505896  215360 out.go:177] 
	W1107 17:18:16.507405  215360 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1107 17:18:16.508791  215360 out.go:177] 

** /stderr **
helpers_test.go:175: Cleaning up "false-171816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-171816
--- PASS: TestNetworkPlugins/group/false (0.61s)

TestStartStop/group/old-k8s-version/serial/FirstStart (123.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-171920 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E1107 17:19:22.808329   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-171920 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m3.502339348s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (123.50s)

TestStartStop/group/no-preload/serial/FirstStart (49.51s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-171935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-171935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (49.509339015s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.51s)

TestStartStop/group/no-preload/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-171935 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [6611606c-68bd-4349-ac80-4f0071ff84b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [6611606c-68bd-4349-ac80-4f0071ff84b0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.01091044s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-171935 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.62s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-171935 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-171935 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.62s)

TestStartStop/group/no-preload/serial/Stop (20.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-171935 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-171935 --alsologtostderr -v=3: (20.022386306s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.02s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171935 -n no-preload-171935
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171935 -n no-preload-171935: exit status 7 (97.805413ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-171935 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (311.95s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-171935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-171935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (5m11.542140283s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171935 -n no-preload-171935
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (311.95s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-171920 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [a1806acb-50c3-4f7e-be00-8d6ed1a76390] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [a1806acb-50c3-4f7e-be00-8d6ed1a76390] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.011719515s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-171920 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.38s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-171920 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-171920 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.59s)

TestStartStop/group/old-k8s-version/serial/Stop (20.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-171920 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-171920 --alsologtostderr -v=3: (20.037854174s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.04s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171920 -n old-k8s-version-171920
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171920 -n old-k8s-version-171920: exit status 7 (102.570418ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-171920 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (420.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-171920 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-171920 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m0.349439931s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171920 -n old-k8s-version-171920
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (420.80s)

TestStartStop/group/embed-certs/serial/FirstStart (44.66s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-172219 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E1107 17:22:54.187257   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-172219 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (44.660131908s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.66s)

TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-172219 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [7bae892f-13db-4f35-9e1c-f9d152b235a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [7bae892f-13db-4f35-9e1c-f9d152b235a9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.010405425s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-172219 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-172219 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-172219 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/embed-certs/serial/Stop (20.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-172219 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-172219 --alsologtostderr -v=3: (20.052511136s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.05s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-172219 -n embed-certs-172219
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-172219 -n embed-certs-172219: exit status 7 (125.719819ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-172219 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (313.34s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-172219 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E1107 17:24:22.808579   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
E1107 17:25:57.237459   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-172219 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (5m12.875591452s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-172219 -n embed-certs-172219
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (313.34s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-g2z4w" [527628e4-2950-4bcb-9d92-e6f63064d630] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-g2z4w" [527628e4-2950-4bcb-9d92-e6f63064d630] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.012159653s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-g2z4w" [527628e4-2950-4bcb-9d92-e6f63064d630] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006797465s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-171935 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-171935 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/no-preload/serial/Pause (2.99s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-171935 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171935 -n no-preload-171935
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171935 -n no-preload-171935: exit status 2 (363.88299ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-171935 -n no-preload-171935
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-171935 -n no-preload-171935: exit status 2 (376.062481ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-171935 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171935 -n no-preload-171935
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-171935 -n no-preload-171935
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-172629 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-172629 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (56.892220292s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.89s)

TestStartStop/group/newest-cni/serial/FirstStart (36.22s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-172639 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E1107 17:27:04.640751   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-172639 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (36.221517883s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.22s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.56s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-172639 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.56s)

TestStartStop/group/newest-cni/serial/Stop (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-172639 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-172639 --alsologtostderr -v=3: (1.324572369s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-172639 -n newest-cni-172639
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-172639 -n newest-cni-172639: exit status 7 (113.240519ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-172639 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (30.38s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-172639 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-172639 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (29.999624945s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-172639 -n newest-cni-172639
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.38s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-172629 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [1b78f67f-1f67-47c8-9f63-9f29c2f17d03] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [1b78f67f-1f67-47c8-9f63-9f29c2f17d03] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.011731792s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-172629 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-172629 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-172629 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.71s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (24.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-172629 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-172629 --alsologtostderr -v=3: (24.039596611s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (24.04s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-172639 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/newest-cni/serial/Pause (3.01s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-172639 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-172639 -n newest-cni-172639
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-172639 -n newest-cni-172639: exit status 2 (377.487668ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-172639 -n newest-cni-172639
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-172639 -n newest-cni-172639: exit status 2 (370.746409ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-172639 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-172639 -n newest-cni-172639
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-172639 -n newest-cni-172639
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.01s)

TestNetworkPlugins/group/auto/Start (45.52s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-171815 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-171815 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (45.524164744s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.52s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-172629 -n default-k8s-diff-port-172629
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-172629 -n default-k8s-diff-port-172629: exit status 7 (132.180087ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-172629 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (570.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-172629 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-172629 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (9m30.12609697s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-172629 -n default-k8s-diff-port-172629
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (570.51s)

TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-171815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

TestNetworkPlugins/group/auto/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-171815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-rvtds" [844fcfed-6d72-4077-8701-3dcfb98401bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-rvtds" [844fcfed-6d72-4077-8701-3dcfb98401bf] Running

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005823133s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.05s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-fthq4" [4e103790-64de-4651-b721-2b38fc325d03] Pending
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-fthq4" [4e103790-64de-4651-b721-2b38fc325d03] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-fthq4" [4e103790-64de-4651-b721-2b38fc325d03] Running

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.05294388s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.05s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-171815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-171815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-171815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-lcc8t" [2534242d-e81a-454b-b810-860ad7c35e9f] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014530191s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestNetworkPlugins/group/kindnet/Start (47.61s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-171816 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-171816 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (47.606453687s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.61s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-lcc8t" [2534242d-e81a-454b-b810-860ad7c35e9f] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005788581s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-171920 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-fthq4" [4e103790-64de-4651-b721-2b38fc325d03] Running

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006512834s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-172219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-171920 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/old-k8s-version/serial/Pause (3.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-171920 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171920 -n old-k8s-version-171920
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171920 -n old-k8s-version-171920: exit status 2 (391.763869ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-171920 -n old-k8s-version-171920
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-171920 -n old-k8s-version-171920: exit status 2 (372.581778ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-171920 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171920 -n old-k8s-version-171920

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-171920 -n old-k8s-version-171920
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.21s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-172219 "sudo crictl images -o json"

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

x
+
TestStartStop/group/embed-certs/serial/Pause (3.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-172219 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-172219 -n embed-certs-172219
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-172219 -n embed-certs-172219: exit status 2 (476.902333ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-172219 -n embed-certs-172219
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-172219 -n embed-certs-172219: exit status 2 (422.245102ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-172219 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-172219 -n embed-certs-172219

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-172219 -n embed-certs-172219
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.74s)

TestNetworkPlugins/group/cilium/Start (105.56s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-171817 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-171817 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m45.554943151s)
--- PASS: TestNetworkPlugins/group/cilium/Start (105.56s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-zks4x" [d30ecd87-0101-45c5-a800-6c4e80a55148] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014964283s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-171816 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-171816 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-jknl8" [f3553d2b-1c3e-4339-941c-ca7f3d5b78ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-jknl8" [f3553d2b-1c3e-4339-941c-ca7f3d5b78ef] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005812092s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.20s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-171816 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-171816 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-171816 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (296.5s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-171815 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1107 17:30:07.688221   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 17:30:25.176470   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
E1107 17:30:25.181757   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
E1107 17:30:25.191975   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
E1107 17:30:25.212231   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
E1107 17:30:25.252516   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
E1107 17:30:25.332958   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
E1107 17:30:25.493412   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
E1107 17:30:25.813558   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
E1107 17:30:26.454447   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
E1107 17:30:27.735575   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
E1107 17:30:30.296368   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
E1107 17:30:35.417141   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
E1107 17:30:45.657362   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-171815 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (4m56.495915517s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (296.50s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-xrxhs" [e1e0bddf-1c4f-4032-9b9e-b8ead5961066] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.015007932s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-171817 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.34s)

TestNetworkPlugins/group/cilium/NetCatPod (10.79s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-171817 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-5xs6z" [b581e044-bdb1-4925-8b14-8ff41bdc225f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-5xs6z" [b581e044-bdb1-4925-8b14-8ff41bdc225f] Running
E1107 17:31:06.137611   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.006345327s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (10.79s)

TestNetworkPlugins/group/cilium/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-171817 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.13s)

TestNetworkPlugins/group/cilium/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-171817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.12s)

TestNetworkPlugins/group/cilium/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-171817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (36.24s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-171815 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
E1107 17:31:24.700517   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:24.705824   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:24.716061   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:24.736968   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:24.777261   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:24.857584   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:25.018056   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:25.338484   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:25.978763   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:27.259168   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:29.819408   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:34.940054   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:45.180336   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/old-k8s-version-171920/client.crt: no such file or directory
E1107 17:31:47.097859   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/no-preload-171935/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-171815 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (36.235781659s)
--- PASS: TestNetworkPlugins/group/bridge/Start (36.24s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-171815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

TestNetworkPlugins/group/bridge/NetCatPod (8.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-171815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-d9gxv" [94a2aa61-0087-4a6b-af9f-89991fe1c12e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-d9gxv" [94a2aa61-0087-4a6b-af9f-89991fe1c12e] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.006014082s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.24s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-171815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-171815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-jvz8c" [8fe6663e-ced9-4236-80c7-ebf1cb5881a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-jvz8c" [8fe6663e-ced9-4236-80c7-ebf1cb5881a1] Running
E1107 17:35:01.804013   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/kindnet-171816/client.crt: no such file or directory
E1107 17:35:02.468688   51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/auto-171815/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.005515173s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-658cb" [9c999346-72c3-4581-9705-fea6f0ea7fad] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011781639s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-658cb" [9c999346-72c3-4581-9705-fea6f0ea7fad] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006412358s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-172629 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-172629 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-172629 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-172629 -n default-k8s-diff-port-172629
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-172629 -n default-k8s-diff-port-172629: exit status 2 (372.404721ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-172629 -n default-k8s-diff-port-172629
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-172629 -n default-k8s-diff-port-172629: exit status 2 (384.470925ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-172629 --alsologtostderr -v=1

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-172629 -n default-k8s-diff-port-172629
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-172629 -n default-k8s-diff-port-172629
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)


Test skip (23/277)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:451: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:456: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.21s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-172629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-172629
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

TestNetworkPlugins/group/kubenet (0.37s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-171815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-171815
--- SKIP: TestNetworkPlugins/group/kubenet (0.37s)

TestNetworkPlugins/group/flannel (0.31s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-171815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-171815
--- SKIP: TestNetworkPlugins/group/flannel (0.31s)

TestNetworkPlugins/group/custom-flannel (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-171816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-171816
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.30s)