Test Report: Docker_Linux_containerd 15232

42265a836106779db8612b4b59ef93e7cadd15f3:2022-10-31:26349
Failed tests (5/277)

Order  Failed test                                      Duration (s)
 205   TestPreload                                           359.48
 213   TestKubernetesUpgrade                                 579.54
 317   TestNetworkPlugins/group/calico/Start                 514.94
 331   TestNetworkPlugins/group/enable-default-cni/DNS       330.78
 334   TestNetworkPlugins/group/bridge/DNS                   337.94
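For local triage, the failed-test names can be pulled out of a table like the one above with a small filter. This is only a sketch: the "Order TestName Duration" column layout is assumed from this report, and the sample rows below are copied from it.

```shell
# Sketch: extract the test-name column (2nd field) from report rows of the
# assumed form "<order> <TestName> <duration>".
report='205 TestPreload 359.48
213 TestKubernetesUpgrade 579.54'

failed=$(printf '%s\n' "$report" | awk '{print $2}')
printf '%s\n' "$failed"

# Each name could then be re-run from a minikube source checkout, e.g.:
#   go test ./test/integration -run "^TestPreload$" -timeout 60m
```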
TestPreload (359.48s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-165950 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1031 17:00:37.107419   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-165950 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m1.343246526s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-165950 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-165950 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.803309779s)
preload_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-165950 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6
E1031 17:00:53.550006   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 17:03:41.813120   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 17:04:14.061215   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
E1031 17:05:04.857712   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
preload_test.go:67: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-165950 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6: exit status 81 (4m52.69353006s)

-- stdout --
	* [test-preload-165950] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the docker driver based on existing profile
	* Starting control plane node test-preload-165950 in cluster test-preload-165950
	* Pulling base image ...
	* Downloading Kubernetes v1.24.6 preload ...
	* Updating the running docker "test-preload-165950" container ...
	* Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
	* Configuring CNI (Container Networking Interface) ...
	X Problems detected in kubelet:
	  Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661461    4266 projected.go:192] Error preparing data for projected volume kube-api-access-8mn6l for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	  Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661580    4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l podName:5031015c-081e-49e2-8d46-09fd879a755c nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661550988 +0000 UTC m=+9.136031253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mn6l" (UniqueName: "kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l") pod "storage-provisioner" (UID: "5031015c-081e-49e2-8d46-09fd879a755c") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	  Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.661699    4266 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	
	

-- /stdout --
** stderr ** 
	I1031 17:00:53.400798  123788 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:00:53.400923  123788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:00:53.400937  123788 out.go:309] Setting ErrFile to fd 2...
	I1031 17:00:53.400944  123788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:00:53.401087  123788 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
	I1031 17:00:53.401650  123788 out.go:303] Setting JSON to false
	I1031 17:00:53.402675  123788 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2603,"bootTime":1667233050,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:00:53.402746  123788 start.go:126] virtualization: kvm guest
	I1031 17:00:53.405697  123788 out.go:177] * [test-preload-165950] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:00:53.407231  123788 out.go:177]   - MINIKUBE_LOCATION=15232
	I1031 17:00:53.407135  123788 notify.go:220] Checking for updates...
	I1031 17:00:53.411021  123788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:00:53.412510  123788 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 17:00:53.414023  123788 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	I1031 17:00:53.415484  123788 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:00:53.417194  123788 config.go:180] Loaded profile config "test-preload-165950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I1031 17:00:53.419061  123788 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I1031 17:00:53.420384  123788 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 17:00:53.448510  123788 docker.go:137] docker version: linux-20.10.21
	I1031 17:00:53.448586  123788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 17:00:53.541306  123788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:39 SystemTime:2022-10-31 17:00:53.467933423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 17:00:53.541406  123788 docker.go:254] overlay module found
	I1031 17:00:53.543484  123788 out.go:177] * Using the docker driver based on existing profile
	I1031 17:00:53.544875  123788 start.go:282] selected driver: docker
	I1031 17:00:53.544894  123788 start.go:808] validating driver "docker" against &{Name:test-preload-165950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-165950 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:00:53.544985  123788 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:00:53.545708  123788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 17:00:53.643264  123788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:39 SystemTime:2022-10-31 17:00:53.565995365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 17:00:53.643528  123788 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 17:00:53.643548  123788 cni.go:95] Creating CNI manager for ""
	I1031 17:00:53.643554  123788 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 17:00:53.643565  123788 start_flags.go:317] config:
	{Name:test-preload-165950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:00:53.645909  123788 out.go:177] * Starting control plane node test-preload-165950 in cluster test-preload-165950
	I1031 17:00:53.647496  123788 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1031 17:00:53.648990  123788 out.go:177] * Pulling base image ...
	I1031 17:00:53.650498  123788 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1031 17:00:53.650525  123788 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1031 17:00:53.672685  123788 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1031 17:00:53.672711  123788 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1031 17:00:53.749918  123788 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1031 17:00:53.750010  123788 cache.go:57] Caching tarball of preloaded images
	I1031 17:00:53.750392  123788 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1031 17:00:53.752786  123788 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
	I1031 17:00:53.754251  123788 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1031 17:00:53.854172  123788 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1031 17:00:56.444223  123788 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1031 17:00:56.444331  123788 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1031 17:00:57.333820  123788 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.6 on containerd
	I1031 17:00:57.333953  123788 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/config.json ...
	I1031 17:00:57.334153  123788 cache.go:208] Successfully downloaded all kic artifacts
	I1031 17:00:57.334182  123788 start.go:364] acquiring machines lock for test-preload-165950: {Name:mk5e2148763cdda5260ddcfe6c84de7081b8765d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:00:57.334270  123788 start.go:368] acquired machines lock for "test-preload-165950" in 68.35µs
	I1031 17:00:57.334286  123788 start.go:96] Skipping create...Using existing machine configuration
	I1031 17:00:57.334291  123788 fix.go:55] fixHost starting: 
	I1031 17:00:57.334493  123788 cli_runner.go:164] Run: docker container inspect test-preload-165950 --format={{.State.Status}}
	I1031 17:00:57.357514  123788 fix.go:103] recreateIfNeeded on test-preload-165950: state=Running err=<nil>
	W1031 17:00:57.357546  123788 fix.go:129] unexpected machine state, will restart: <nil>
	I1031 17:00:57.360746  123788 out.go:177] * Updating the running docker "test-preload-165950" container ...
	I1031 17:00:57.362040  123788 machine.go:88] provisioning docker machine ...
	I1031 17:00:57.362068  123788 ubuntu.go:169] provisioning hostname "test-preload-165950"
	I1031 17:00:57.362115  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:57.384936  123788 main.go:134] libmachine: Using SSH client type: native
	I1031 17:00:57.385100  123788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49277 <nil> <nil>}
	I1031 17:00:57.385117  123788 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-165950 && echo "test-preload-165950" | sudo tee /etc/hostname
	I1031 17:00:57.508480  123788 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-165950
	
	I1031 17:00:57.508560  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:57.532320  123788 main.go:134] libmachine: Using SSH client type: native
	I1031 17:00:57.532481  123788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49277 <nil> <nil>}
	I1031 17:00:57.532510  123788 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-165950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-165950/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-165950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:00:57.648181  123788 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:00:57.648212  123788 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-3650/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-3650/.minikube}
	I1031 17:00:57.648234  123788 ubuntu.go:177] setting up certificates
	I1031 17:00:57.648244  123788 provision.go:83] configureAuth start
	I1031 17:00:57.648321  123788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-165950
	I1031 17:00:57.672013  123788 provision.go:138] copyHostCerts
	I1031 17:00:57.672105  123788 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem, removing ...
	I1031 17:00:57.672125  123788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem
	I1031 17:00:57.672195  123788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem (1078 bytes)
	I1031 17:00:57.672283  123788 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem, removing ...
	I1031 17:00:57.672295  123788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem
	I1031 17:00:57.672323  123788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem (1123 bytes)
	I1031 17:00:57.672372  123788 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem, removing ...
	I1031 17:00:57.672381  123788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem
	I1031 17:00:57.672407  123788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem (1679 bytes)
	I1031 17:00:57.672455  123788 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem org=jenkins.test-preload-165950 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-165950]
	I1031 17:00:57.797650  123788 provision.go:172] copyRemoteCerts
	I1031 17:00:57.797711  123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:00:57.797742  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:57.822580  123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
	I1031 17:00:57.907487  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 17:00:57.925574  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1031 17:00:57.945093  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 17:00:57.962901  123788 provision.go:86] duration metric: configureAuth took 314.615745ms
	I1031 17:00:57.962927  123788 ubuntu.go:193] setting minikube options for container-runtime
	I1031 17:00:57.963104  123788 config.go:180] Loaded profile config "test-preload-165950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I1031 17:00:57.963117  123788 machine.go:91] provisioned docker machine in 601.061986ms
	I1031 17:00:57.963123  123788 start.go:300] post-start starting for "test-preload-165950" (driver="docker")
	I1031 17:00:57.963131  123788 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:00:57.963167  123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:00:57.963199  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:57.987686  123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
	I1031 17:00:58.071508  123788 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:00:58.074511  123788 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1031 17:00:58.074535  123788 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1031 17:00:58.074543  123788 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1031 17:00:58.074549  123788 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1031 17:00:58.074562  123788 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/addons for local assets ...
	I1031 17:00:58.074617  123788 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/files for local assets ...
	I1031 17:00:58.074698  123788 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem -> 100972.pem in /etc/ssl/certs
	I1031 17:00:58.074797  123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:00:58.082460  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /etc/ssl/certs/100972.pem (1708 bytes)
	I1031 17:00:58.099618  123788 start.go:303] post-start completed in 136.482468ms
	I1031 17:00:58.099687  123788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 17:00:58.099718  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:58.122912  123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
	I1031 17:00:58.204709  123788 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1031 17:00:58.208921  123788 fix.go:57] fixHost completed within 874.623341ms
	I1031 17:00:58.208952  123788 start.go:83] releasing machines lock for "test-preload-165950", held for 874.669884ms
	I1031 17:00:58.209045  123788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-165950
	I1031 17:00:58.231368  123788 ssh_runner.go:195] Run: systemctl --version
	I1031 17:00:58.231411  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:58.231475  123788 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1031 17:00:58.231537  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:58.254909  123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
	I1031 17:00:58.256772  123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
	I1031 17:00:58.359932  123788 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:00:58.370867  123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:00:58.380533  123788 docker.go:189] disabling docker service ...
	I1031 17:00:58.380587  123788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 17:00:58.390611  123788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 17:00:58.400540  123788 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 17:00:58.503571  123788 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 17:00:58.601357  123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 17:00:58.610768  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
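The two-endpoint crictl configuration written above can be reproduced against a scratch path (the real target is /etc/crictl.yaml, written via sudo tee on the node):

```shell
# Sketch only: writes the same crictl endpoint config to a temp file
# instead of /etc/crictl.yaml.
out=$(mktemp)
printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" > "$out"
cat "$out"
```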
	I1031 17:00:58.623982  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I1031 17:00:58.631971  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1031 17:00:58.639948  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1031 17:00:58.647731  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
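The sed invocations above rewrite whole matching lines of /etc/containerd/config.toml in place. The same pattern can be exercised on a scratch copy (the sample TOML below is illustrative, not the full containerd config):

```shell
# Apply the sandbox_image and conf_dir rewrites to a scratch config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "k8s.gcr.io/pause:3.6"
[plugins."io.containerd.grpc.v1.cri".cni]
  conf_dir = "/etc/cni/net.d"
EOF
# Same whole-line replacement style as the log above.
sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i "$cfg"
sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i "$cfg"
cat "$cfg"
```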
	I1031 17:00:58.655857  123788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:00:58.662159  123788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:00:58.668160  123788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:00:58.765634  123788 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:00:58.838270  123788 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1031 17:00:58.838340  123788 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1031 17:00:58.842645  123788 start.go:472] Will wait 60s for crictl version
	I1031 17:00:58.842710  123788 ssh_runner.go:195] Run: sudo crictl version
	I1031 17:00:58.873990  123788 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-10-31T17:00:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
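The "server is not initialized yet" failure above is handled by retrying `sudo crictl version` with backoff (the retry.go line). A stand-in sketch of the same poll-until-ready shape, using a file appearing on disk as the readiness signal instead of a CRI runtime:

```shell
# Poll until a readiness marker exists, as a stand-in for the runtime
# answering `crictl version`; gives up after ~5s.
marker=$(mktemp -u)             # path that does not exist yet
( sleep 1; touch "$marker" ) &  # the "runtime" initializes in the background
tries=0
until [ -e "$marker" ]; do
  tries=$((tries + 1))
  if [ "$tries" -gt 50 ]; then echo "timed out"; exit 1; fi
  sleep 0.1
done
echo "ready after $tries polls"
```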
	I1031 17:01:09.921926  123788 ssh_runner.go:195] Run: sudo crictl version
	I1031 17:01:09.945289  123788 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1031 17:01:09.945349  123788 ssh_runner.go:195] Run: containerd --version
	I1031 17:01:09.970198  123788 ssh_runner.go:195] Run: containerd --version
	I1031 17:01:09.996976  123788 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
	I1031 17:01:09.998646  123788 cli_runner.go:164] Run: docker network inspect test-preload-165950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1031 17:01:10.021855  123788 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1031 17:01:10.025738  123788 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1031 17:01:10.025795  123788 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 17:01:10.050811  123788 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
	I1031 17:01:10.050875  123788 ssh_runner.go:195] Run: which lz4
	I1031 17:01:10.053855  123788 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 17:01:10.056765  123788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
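The stat failure above is the expected branch of minikube's copy-if-missing check: stat the remote path first, and only scp the large preload tarball when it is absent. A local sketch of that shape (scratch path; the real check runs over SSH):

```shell
# stat-then-copy: the copy happens only when stat fails.
target=$(mktemp -u)   # deliberately missing, like /preloaded.tar.lz4 above
if stat -c "%s %y" "$target" >/dev/null 2>&1; then
  echo "exists, skipping copy"
else
  echo "missing, copying"
  : > "$target"       # stand-in for the scp of the tarball
fi
```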
	I1031 17:01:10.056789  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
	I1031 17:01:11.012204  123788 containerd.go:496] Took 0.958385 seconds to copy over tarball
	I1031 17:01:11.012279  123788 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 17:01:13.898440  123788 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.886126931s)
	I1031 17:01:13.898474  123788 containerd.go:503] Took 2.886238 seconds to extract the tarball

	I1031 17:01:13.898485  123788 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 17:01:13.924871  123788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:01:14.027291  123788 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:01:14.105585  123788 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 17:01:14.153742  123788 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 17:01:14.153832  123788 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:01:14.153879  123788 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
	I1031 17:01:14.153933  123788 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1031 17:01:14.153950  123788 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I1031 17:01:14.153997  123788 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1031 17:01:14.154093  123788 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I1031 17:01:14.154143  123788 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1031 17:01:14.154158  123788 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1031 17:01:14.154858  123788 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1031 17:01:14.154930  123788 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1031 17:01:14.155027  123788 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1031 17:01:14.155037  123788 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I1031 17:01:14.155035  123788 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:01:14.155041  123788 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I1031 17:01:14.154859  123788 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
	I1031 17:01:14.155056  123788 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1031 17:01:14.639297  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I1031 17:01:14.649105  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
	I1031 17:01:14.661797  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I1031 17:01:14.676815  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I1031 17:01:14.688769  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
	I1031 17:01:14.693655  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
	I1031 17:01:14.714906  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
	I1031 17:01:14.949489  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1031 17:01:15.471396  123788 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1031 17:01:15.471444  123788 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I1031 17:01:15.471487  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.667668  123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6": (1.005826513s)
	I1031 17:01:15.667922  123788 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1031 17:01:15.667990  123788 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1031 17:01:15.668043  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.667834  123788 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1031 17:01:15.668185  123788 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I1031 17:01:15.668229  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.667889  123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6": (1.018754573s)
	I1031 17:01:15.668329  123788 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
	I1031 17:01:15.668357  123788 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1031 17:01:15.668378  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.675016  123788 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
	I1031 17:01:15.675057  123788 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1031 17:01:15.675083  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.748343  123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6": (1.05465106s)
	I1031 17:01:15.748403  123788 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
	I1031 17:01:15.748433  123788 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
	I1031 17:01:15.748479  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.773417  123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6": (1.058470688s)
	I1031 17:01:15.773475  123788 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
	I1031 17:01:15.773543  123788 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1031 17:01:15.773610  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.796393  123788 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1031 17:01:15.796447  123788 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:01:15.796450  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I1031 17:01:15.796474  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.796543  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I1031 17:01:15.796574  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
	I1031 17:01:15.796615  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I1031 17:01:15.796661  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
	I1031 17:01:15.796762  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
	I1031 17:01:15.796793  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
	I1031 17:01:15.849303  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:01:16.518326  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I1031 17:01:16.518410  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
	I1031 17:01:16.518448  123788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1031 17:01:16.518466  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I1031 17:01:16.518546  123788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1031 17:01:16.518609  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I1031 17:01:16.518661  123788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1031 17:01:16.518667  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
	I1031 17:01:16.519958  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
	I1031 17:01:16.520022  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
	I1031 17:01:16.520164  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1031 17:01:16.520245  123788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1031 17:01:16.522338  123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1031 17:01:16.522367  123788 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1031 17:01:16.522400  123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I1031 17:01:16.522738  123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1031 17:01:16.522918  123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1031 17:01:16.523532  123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1031 17:01:23.289265  123788 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (6.766830961s)
	I1031 17:01:23.289325  123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I1031 17:01:23.289354  123788 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1031 17:01:23.289408  123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I1031 17:01:24.806710  123788 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.517273083s)
	I1031 17:01:24.806742  123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I1031 17:01:24.806797  123788 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
	I1031 17:01:24.806862  123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I1031 17:01:24.985051  123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I1031 17:01:24.985104  123788 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1031 17:01:24.985171  123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1031 17:01:25.471171  123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1031 17:01:25.471237  123788 cache_images.go:92] LoadImages completed in 11.317456964s
	W1031 17:01:25.471403  123788 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6: no such file or directory
	I1031 17:01:25.471469  123788 ssh_runner.go:195] Run: sudo crictl info
	I1031 17:01:25.549548  123788 cni.go:95] Creating CNI manager for ""
	I1031 17:01:25.549585  123788 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 17:01:25.549601  123788 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:01:25.549618  123788 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-165950 NodeName:test-preload-165950 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1031 17:01:25.549786  123788 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-165950"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:01:25.549897  123788 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-165950 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.6 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
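The empty `ExecStart=` line in the kubelet unit above is the standard systemd drop-in idiom: clearing the inherited start command before setting a new one, so the override replaces rather than appends. A minimal drop-in of the same shape (flag list trimmed from the real file above):

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (illustrative shape)
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --config=/var/lib/kubelet/config.yaml
```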
	I1031 17:01:25.549966  123788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
	I1031 17:01:25.559048  123788 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:01:25.559118  123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:01:25.568146  123788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
	I1031 17:01:25.583110  123788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:01:25.598681  123788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I1031 17:01:25.662413  123788 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1031 17:01:25.666268  123788 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950 for IP: 192.168.67.2
	I1031 17:01:25.666403  123788 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key
	I1031 17:01:25.666458  123788 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key
	I1031 17:01:25.666558  123788 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/client.key
	I1031 17:01:25.666633  123788 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/apiserver.key.c7fa3a9e
	I1031 17:01:25.666689  123788 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/proxy-client.key
	I1031 17:01:25.666801  123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem (1338 bytes)
	W1031 17:01:25.666847  123788 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097_empty.pem, impossibly tiny 0 bytes
	I1031 17:01:25.666873  123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:01:25.666908  123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem (1078 bytes)
	I1031 17:01:25.666943  123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:01:25.666974  123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem (1679 bytes)
	I1031 17:01:25.667033  123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem (1708 bytes)
	I1031 17:01:25.667673  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:01:25.690455  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 17:01:25.763539  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:01:25.790140  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 17:01:25.861083  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:01:25.879599  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:01:25.898515  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:01:25.922119  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:01:25.959078  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /usr/share/ca-certificates/100972.pem (1708 bytes)
	I1031 17:01:25.980032  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:01:26.000424  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem --> /usr/share/ca-certificates/10097.pem (1338 bytes)
	I1031 17:01:26.053381  123788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1031 17:01:26.067535  123788 ssh_runner.go:195] Run: openssl version
	I1031 17:01:26.072627  123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:01:26.080989  123788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:01:26.085427  123788 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 31 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:01:26.085503  123788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:01:26.091369  123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:01:26.099802  123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10097.pem && ln -fs /usr/share/ca-certificates/10097.pem /etc/ssl/certs/10097.pem"
	I1031 17:01:26.108642  123788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10097.pem
	I1031 17:01:26.112303  123788 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 31 16:41 /usr/share/ca-certificates/10097.pem
	I1031 17:01:26.112374  123788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10097.pem
	I1031 17:01:26.125705  123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10097.pem /etc/ssl/certs/51391683.0"
	I1031 17:01:26.133946  123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100972.pem && ln -fs /usr/share/ca-certificates/100972.pem /etc/ssl/certs/100972.pem"
	I1031 17:01:26.142159  123788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100972.pem
	I1031 17:01:26.145685  123788 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 31 16:41 /usr/share/ca-certificates/100972.pem
	I1031 17:01:26.145748  123788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100972.pem
	I1031 17:01:26.150967  123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100972.pem /etc/ssl/certs/3ec20f2e.0"
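The `openssl x509 -hash` / `ln -fs ... <hash>.0` pairs above implement OpenSSL's subject-hash lookup scheme: TLS clients locate a CA certificate in /etc/ssl/certs by the 8-hex-digit hash of its subject name. A self-contained sketch (assumes `openssl` is installed; all paths are scratch, and the throwaway cert stands in for minikubeCA.pem):

```shell
set -e
dir=$(mktemp -d)
# Generate a throwaway self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=sketch" \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" -days 1 2>/dev/null
# The subject-name hash determines the symlink name OpenSSL will look up.
h=$(openssl x509 -hash -noout -in "$dir/cert.pem")
ln -fs "$dir/cert.pem" "$dir/$h.0"
echo "$h"
```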
	I1031 17:01:26.158917  123788 kubeadm.go:396] StartCluster: {Name:test-preload-165950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:01:26.159010  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1031 17:01:26.159074  123788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 17:01:26.185271  123788 cri.go:87] found id: "9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c"
	I1031 17:01:26.185298  123788 cri.go:87] found id: "9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf"
	I1031 17:01:26.185306  123788 cri.go:87] found id: "92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e"
	I1031 17:01:26.185314  123788 cri.go:87] found id: ""
	I1031 17:01:26.185368  123788 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1031 17:01:26.219799  123788 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e","pid":2647,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e/rootfs","created":"2022-10-31T17:00:48.864140497Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-8wsrc","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","pid":2192,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489/rootfs","created":"2022-10-31T17:00:31.051060479Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-54b5q_0ff95637-a367-440b-918f-495391f2f1cf","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-54b5q","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823","pid":1627,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823/rootfs","created":"2022-10-31T17:00:11.805153802Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","io.kubernetes.cri.sandbox-name":"etcd-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f","pid":1649,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f/rootfs","created":"2022-10-31T17:00:11.813683549Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322","pid":3587,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322/rootfs","created":"2022-10-31T17:01:17.561189062Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-165950_745aa6453df7e4d7a2bedb8ef855e2b8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95","pid":3592,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95/rootfs","created":"2022-10-31T17:01:17.563577882Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-54b5q_0ff95637-a367-440b-918f-495391f2f1cf","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-54b5q","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4","pid":3582,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4/rootfs","created":"2022-10-31T17:01:17.563663092Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-8wsrc_8e76d465-ae9a-4121-b7ed-1ef94dd20b7e","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-8wsrc","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53","pid":2448,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53/rootfs","created":"2022-10-31T17:00:34.399637747Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","io.kubernetes.cri.sandbox-name":"kindnet-jljff","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","pid":1514,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2/rootfs","created":"2022-10-31T17:00:11.599833796Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-165950_f7f285bbceeae66435f07854fddd011c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49","pid":1640,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49/rootfs","created":"2022-10-31T17:00:11.816128062Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f","pid":3586,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f/rootfs","created":"2022-10-31T17:01:17.562640113Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-jljff_e66c31a9-8e36-4914-a086-32ba2b3dc004","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-jljff","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","pid":2589,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1/rootfs","created":"2022-10-31T17:00:48.750343759Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_5031015c-081e-49e2-8d46-09fd879a755c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf","pid":2648,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf/rootfs","created":"2022-10-31T17:00:48.864144417Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e","pid":3774,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e/rootfs","created":"2022-10-31T17:01:22.960016255Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383/rootfs","created":"2022-10-31T17:00:11.597466189Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-165950_745aa6453df7e4d7a2bedb8ef855e2b8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383","pid":3588,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383/rootfs","created":"2022-10-31T17:01:17.560848225Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-165950_8a2a3eb7a75eb7f169392f7d77b36d78","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","pid":1513,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9/rootfs","created":"2022-10-31T17:00:11.599047477Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-165950_f04a99c5aa78b1fe8d30a6291f8f68f1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","pid":1515,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865/rootfs","created":"2022-10-31T17:00:11.599316909Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-165950_8a2a3eb7a75eb7f169392f7d77b36d78","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f","pid":3487,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f/rootfs","created":"2022-10-31T17:01:17.250859Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-165950_f04a99c5aa78b1fe8d30a6291f8f68f1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173","pid":2229,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173/rootfs","created":"2022-10-31T17:00:31.186453275Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","io.kubernetes.cri.sandbox-name":"kube-proxy-54b5q","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d","pid":1648,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d/rootfs","created":"2022-10-31T17:00:11.813286817Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","pid":3530,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3/rootfs","created":"2022-10-31T17:01:17.36677191Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_5031015c-081e-49e2-8d46-09fd879a755c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","pid":2590,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2/rootfs","created":"2022-10-31T17:00:48.752117294Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-8wsrc_8e76d465-ae9a-4121-b7ed-1ef94dd20b7e","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-8wsrc","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7","pid":3398,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7/rootfs","created":"2022-10-31T17:01:17.15855458Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-165950_f7f285bbceeae66435f07854fddd011c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","pid":2191,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4/rootfs","created":"2022-10-31T17:00:31.051134048Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-jljff_e66c31a9-8e36-4914-a086-32ba2b3dc004","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-jljff","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
	I1031 17:01:26.220196  123788 cri.go:124] list returned 25 containers
	I1031 17:01:26.220215  123788 cri.go:127] container: {ID:08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e Status:running}
	I1031 17:01:26.220253  123788 cri.go:129] skipping 08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e - not in ps
	I1031 17:01:26.220265  123788 cri.go:127] container: {ID:08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489 Status:running}
	I1031 17:01:26.220285  123788 cri.go:129] skipping 08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489 - not in ps
	I1031 17:01:26.220298  123788 cri.go:127] container: {ID:0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823 Status:running}
	I1031 17:01:26.220316  123788 cri.go:129] skipping 0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823 - not in ps
	I1031 17:01:26.220327  123788 cri.go:127] container: {ID:0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f Status:running}
	I1031 17:01:26.220336  123788 cri.go:129] skipping 0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f - not in ps
	I1031 17:01:26.220347  123788 cri.go:127] container: {ID:10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322 Status:running}
	I1031 17:01:26.220360  123788 cri.go:129] skipping 10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322 - not in ps
	I1031 17:01:26.220369  123788 cri.go:127] container: {ID:1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95 Status:running}
	I1031 17:01:26.220377  123788 cri.go:129] skipping 1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95 - not in ps
	I1031 17:01:26.220385  123788 cri.go:127] container: {ID:24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4 Status:running}
	I1031 17:01:26.220398  123788 cri.go:129] skipping 24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4 - not in ps
	I1031 17:01:26.220409  123788 cri.go:127] container: {ID:4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53 Status:running}
	I1031 17:01:26.220422  123788 cri.go:129] skipping 4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53 - not in ps
	I1031 17:01:26.220433  123788 cri.go:127] container: {ID:534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2 Status:running}
	I1031 17:01:26.220445  123788 cri.go:129] skipping 534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2 - not in ps
	I1031 17:01:26.220456  123788 cri.go:127] container: {ID:715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49 Status:running}
	I1031 17:01:26.220468  123788 cri.go:129] skipping 715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49 - not in ps
	I1031 17:01:26.220479  123788 cri.go:127] container: {ID:72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f Status:running}
	I1031 17:01:26.220491  123788 cri.go:129] skipping 72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f - not in ps
	I1031 17:01:26.220498  123788 cri.go:127] container: {ID:8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1 Status:running}
	I1031 17:01:26.220510  123788 cri.go:129] skipping 8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1 - not in ps
	I1031 17:01:26.220522  123788 cri.go:127] container: {ID:91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf Status:running}
	I1031 17:01:26.220540  123788 cri.go:129] skipping 91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf - not in ps
	I1031 17:01:26.220551  123788 cri.go:127] container: {ID:92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e Status:running}
	I1031 17:01:26.220564  123788 cri.go:133] skipping {92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e running}: state = "running", want "paused"
	I1031 17:01:26.220578  123788 cri.go:127] container: {ID:a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383 Status:running}
	I1031 17:01:26.220590  123788 cri.go:129] skipping a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383 - not in ps
	I1031 17:01:26.220601  123788 cri.go:127] container: {ID:ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383 Status:running}
	I1031 17:01:26.220614  123788 cri.go:129] skipping ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383 - not in ps
	I1031 17:01:26.220625  123788 cri.go:127] container: {ID:ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9 Status:running}
	I1031 17:01:26.220637  123788 cri.go:129] skipping ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9 - not in ps
	I1031 17:01:26.220648  123788 cri.go:127] container: {ID:bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865 Status:running}
	I1031 17:01:26.220660  123788 cri.go:129] skipping bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865 - not in ps
	I1031 17:01:26.220670  123788 cri.go:127] container: {ID:c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f Status:running}
	I1031 17:01:26.220679  123788 cri.go:129] skipping c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f - not in ps
	I1031 17:01:26.220689  123788 cri.go:127] container: {ID:c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173 Status:running}
	I1031 17:01:26.220702  123788 cri.go:129] skipping c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173 - not in ps
	I1031 17:01:26.220712  123788 cri.go:127] container: {ID:ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d Status:running}
	I1031 17:01:26.220724  123788 cri.go:129] skipping ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d - not in ps
	I1031 17:01:26.220735  123788 cri.go:127] container: {ID:d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3 Status:running}
	I1031 17:01:26.220749  123788 cri.go:129] skipping d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3 - not in ps
	I1031 17:01:26.220764  123788 cri.go:127] container: {ID:ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2 Status:running}
	I1031 17:01:26.220776  123788 cri.go:129] skipping ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2 - not in ps
	I1031 17:01:26.220787  123788 cri.go:127] container: {ID:debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7 Status:running}
	I1031 17:01:26.220800  123788 cri.go:129] skipping debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7 - not in ps
	I1031 17:01:26.220811  123788 cri.go:127] container: {ID:e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4 Status:running}
	I1031 17:01:26.220823  123788 cri.go:129] skipping e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4 - not in ps
	I1031 17:01:26.220874  123788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:01:26.228503  123788 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1031 17:01:26.228526  123788 kubeadm.go:627] restartCluster start
	I1031 17:01:26.228569  123788 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 17:01:26.242514  123788 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:01:26.243313  123788 kubeconfig.go:92] found "test-preload-165950" server: "https://192.168.67.2:8443"
	I1031 17:01:26.244383  123788 kapi.go:59] client config for test-preload-165950: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/client.key", CAFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1782ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:01:26.245028  123788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 17:01:26.254439  123788 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-10-31 17:00:07.362490176 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-10-31 17:01:25.658180104 +0000
	@@ -38,7 +38,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.24.4
	+kubernetesVersion: v1.24.6
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1031 17:01:26.254466  123788 kubeadm.go:1114] stopping kube-system containers ...
	I1031 17:01:26.254477  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1031 17:01:26.254530  123788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 17:01:26.279826  123788 cri.go:87] found id: "9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c"
	I1031 17:01:26.279858  123788 cri.go:87] found id: "9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf"
	I1031 17:01:26.279865  123788 cri.go:87] found id: "92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e"
	I1031 17:01:26.279880  123788 cri.go:87] found id: ""
	I1031 17:01:26.279886  123788 cri.go:232] Stopping containers: [9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c 9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf 92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e]
	I1031 17:01:26.279928  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:26.283140  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c 9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf 92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e
	I1031 17:01:26.348311  123788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 17:01:26.415283  123788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:01:26.422710  123788 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct 31 17:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 31 17:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Oct 31 17:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Oct 31 17:00 /etc/kubernetes/scheduler.conf
	
	I1031 17:01:26.422771  123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1031 17:01:26.429820  123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1031 17:01:26.436664  123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1031 17:01:26.443399  123788 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:01:26.443466  123788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1031 17:01:26.450583  123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1031 17:01:26.457143  123788 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:01:26.457191  123788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1031 17:01:26.463634  123788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:01:26.471032  123788 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 17:01:26.471057  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:01:26.714848  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:01:27.216857  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:01:27.525201  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:01:27.574451  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:01:27.654849  123788 api_server.go:51] waiting for apiserver process to appear ...
	I1031 17:01:27.654955  123788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:01:27.673572  123788 api_server.go:71] duration metric: took 18.72073ms to wait for apiserver process to appear ...
	I1031 17:01:27.673610  123788 api_server.go:87] waiting for apiserver healthz status ...
	I1031 17:01:27.673630  123788 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1031 17:01:27.678700  123788 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1031 17:01:27.685812  123788 api_server.go:140] control plane version: v1.24.4
	W1031 17:01:27.685841  123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1031 17:01:28.187382  123788 api_server.go:140] control plane version: v1.24.4
	W1031 17:01:28.187416  123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1031 17:01:28.687372  123788 api_server.go:140] control plane version: v1.24.4
	W1031 17:01:28.687411  123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1031 17:01:29.187825  123788 api_server.go:140] control plane version: v1.24.4
	W1031 17:01:29.187861  123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1031 17:01:29.687064  123788 api_server.go:140] control plane version: v1.24.4
	W1031 17:01:29.687093  123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	W1031 17:01:30.187422  123788 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1031 17:01:30.686425  123788 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1031 17:01:31.186366  123788 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	I1031 17:01:35.664099  123788 api_server.go:140] control plane version: v1.24.6
	I1031 17:01:35.664206  123788 api_server.go:130] duration metric: took 7.990587678s to wait for apiserver health ...
	I1031 17:01:35.664232  123788 cni.go:95] Creating CNI manager for ""
	I1031 17:01:35.664274  123788 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 17:01:35.666396  123788 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1031 17:01:35.668255  123788 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1031 17:01:35.857942  123788 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
	I1031 17:01:35.857986  123788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1031 17:01:35.965517  123788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1031 17:01:37.314933  123788 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.349353719s)
	I1031 17:01:37.314969  123788 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 17:01:37.323258  123788 system_pods.go:59] 8 kube-system pods found
	I1031 17:01:37.323308  123788 system_pods.go:61] "coredns-6d4b75cb6d-8wsrc" [8e76d465-ae9a-4121-b7ed-1ef94dd20b7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 17:01:37.323319  123788 system_pods.go:61] "etcd-test-preload-165950" [1738672d-0339-423c-9013-d39e8cbb16c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 17:01:37.323333  123788 system_pods.go:61] "kindnet-jljff" [e66c31a9-8e36-4914-a086-32ba2b3dc004] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1031 17:01:37.323348  123788 system_pods.go:61] "kube-apiserver-test-preload-165950" [a505e0cf-4d56-47bf-865b-6052277ce195] Pending
	I1031 17:01:37.323358  123788 system_pods.go:61] "kube-controller-manager-test-preload-165950" [ebf46104-24d9-427e-b5af-643a80e0aceb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 17:01:37.323374  123788 system_pods.go:61] "kube-proxy-54b5q" [0ff95637-a367-440b-918f-495391f2f1cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 17:01:37.323384  123788 system_pods.go:61] "kube-scheduler-test-preload-165950" [5a7cd673-4c3a-4123-9be5-5f44a196a478] Pending
	I1031 17:01:37.323397  123788 system_pods.go:61] "storage-provisioner" [5031015c-081e-49e2-8d46-09fd879a755c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 17:01:37.323409  123788 system_pods.go:74] duration metric: took 8.433081ms to wait for pod list to return data ...
	I1031 17:01:37.323422  123788 node_conditions.go:102] verifying NodePressure condition ...
	I1031 17:01:37.326311  123788 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1031 17:01:37.326342  123788 node_conditions.go:123] node cpu capacity is 8
	I1031 17:01:37.326356  123788 node_conditions.go:105] duration metric: took 2.929267ms to run NodePressure ...
	I1031 17:01:37.326375  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:01:37.573644  123788 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1031 17:01:37.578158  123788 kubeadm.go:778] kubelet initialised
	I1031 17:01:37.578189  123788 kubeadm.go:779] duration metric: took 4.510409ms waiting for restarted kubelet to initialise ...
	I1031 17:01:37.578198  123788 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:01:37.583642  123788 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace to be "Ready" ...
	I1031 17:01:39.594948  123788 pod_ready.go:102] pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace has status "Ready":"False"
	I1031 17:01:42.094075  123788 pod_ready.go:102] pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace has status "Ready":"False"
	I1031 17:01:43.095366  123788 pod_ready.go:92] pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace has status "Ready":"True"
	I1031 17:01:43.095404  123788 pod_ready.go:81] duration metric: took 5.511730023s waiting for pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace to be "Ready" ...
	I1031 17:01:43.095417  123788 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-165950" in "kube-system" namespace to be "Ready" ...
	I1031 17:01:45.107196  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:01:47.606767  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:01:50.106591  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:01:52.606128  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:01:55.106948  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:01:57.606675  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:01:59.606942  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:01.607143  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:03.607189  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:06.106997  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:08.606022  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:10.607066  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:12.607191  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:15.106122  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:17.106164  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:19.106356  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:21.106711  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:23.606999  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:26.106549  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:28.107170  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:30.606839  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:33.106308  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:35.606836  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:38.106617  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:40.107031  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:42.606997  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:45.105907  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:47.106139  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:49.606661  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:51.607461  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:54.107427  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:56.607579  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:02:59.106638  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:01.106850  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:03.606788  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:05.606874  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:08.106321  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:10.106538  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:12.106959  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:14.607205  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:16.607305  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:19.105988  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:21.106170  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:23.107105  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:25.607263  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:28.106356  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:30.107148  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:32.606490  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:35.105741  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:37.106647  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:39.106715  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:41.606595  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:44.106322  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:46.106599  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:48.106645  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:50.607046  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:53.106597  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:55.607036  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:03:58.106177  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:00.106478  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:02.106672  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:04.106777  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:06.606029  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:08.606391  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:10.606890  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:13.105929  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:15.106871  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:17.605837  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:19.606273  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:21.606690  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:23.608947  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:26.106036  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:28.106069  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:30.106922  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:32.606315  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:34.606779  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:36.607034  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:39.106139  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:41.106298  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:43.106379  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:45.606574  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:47.606629  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:50.106351  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:52.606744  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:55.106115  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:04:57.606837  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:00.107089  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:02.606977  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:05.106235  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:07.106494  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:09.606180  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:11.607064  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:14.106300  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:16.106339  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:18.605987  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:20.606927  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:23.106287  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:25.606564  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:28.106222  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:30.106425  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:32.607544  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:35.105790  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:37.106524  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:39.106668  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:41.606128  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:43.100897  123788 pod_ready.go:81] duration metric: took 4m0.005465717s waiting for pod "etcd-test-preload-165950" in "kube-system" namespace to be "Ready" ...
	E1031 17:05:43.100926  123788 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-165950" in "kube-system" namespace to be "Ready" (will not retry!)
	I1031 17:05:43.100947  123788 pod_ready.go:38] duration metric: took 4m5.522739337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:05:43.100986  123788 kubeadm.go:631] restartCluster took 4m16.872448037s
	W1031 17:05:43.101155  123788 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
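For context, the extra-wait step above polls each system-critical pod until it reports "Ready" or the 4m0s deadline expires, after which minikube gives up and resets the cluster. A minimal sketch of that poll-with-deadline pattern (hypothetical helper, not minikube's actual Go code):

```python
import time

def wait_for_condition(check, timeout_s=240.0, interval_s=0.5,
                       sleep=time.sleep, clock=time.monotonic):
    """Poll check() until it returns True or timeout_s elapses.

    Mirrors the pod_ready loop in the log: on timeout the caller
    falls back to a full cluster reset instead of retrying forever.
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        if check():
            return True
        sleep(interval_s)
    return False
```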
	I1031 17:05:43.101190  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1031 17:05:44.844963  123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.743753735s)
	I1031 17:05:44.845025  123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:05:44.855523  123788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:05:44.862648  123788 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1031 17:05:44.862707  123788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:05:44.870144  123788 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:05:44.870199  123788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1031 17:05:44.907996  123788 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I1031 17:05:44.908047  123788 kubeadm.go:317] [preflight] Running pre-flight checks
	I1031 17:05:44.935802  123788 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1031 17:05:44.935928  123788 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1031 17:05:44.935973  123788 kubeadm.go:317] OS: Linux
	I1031 17:05:44.936020  123788 kubeadm.go:317] CGROUPS_CPU: enabled
	I1031 17:05:44.936060  123788 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1031 17:05:44.936139  123788 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1031 17:05:44.936189  123788 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1031 17:05:44.936256  123788 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1031 17:05:44.936353  123788 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1031 17:05:44.936421  123788 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1031 17:05:44.936478  123788 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1031 17:05:44.936542  123788 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1031 17:05:45.016629  123788 kubeadm.go:317] W1031 17:05:44.903005    6621 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1031 17:05:45.016840  123788 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1031 17:05:45.016930  123788 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:05:45.016992  123788 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I1031 17:05:45.017027  123788 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I1031 17:05:45.017070  123788 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I1031 17:05:45.017152  123788 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1031 17:05:45.017213  123788 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1031 17:05:45.017401  123788 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1031 17:05:44.903005    6621 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
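The fatal `[ERROR Port-2379]` / `[ERROR Port-2380]` preflight failures indicate the etcd client and peer ports are still bound, presumably by the etcd left over from the aborted restart. kubeadm's port preflight check amounts to attempting to listen on each port; a rough stand-in for that check (assumption: simplified to a plain TCP bind, not kubeadm's actual implementation):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if binding the TCP port fails, roughly what the
    kubeadm Port-2379/Port-2380 preflight checks test for."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
        except OSError:
            return True  # something (e.g. a stale etcd) still holds it
    return False
```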
	I1031 17:05:45.017440  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1031 17:05:45.355913  123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:05:45.365437  123788 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1031 17:05:45.365484  123788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:05:45.372598  123788 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:05:45.372638  123788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1031 17:05:45.410978  123788 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I1031 17:05:45.411059  123788 kubeadm.go:317] [preflight] Running pre-flight checks
	I1031 17:05:45.437866  123788 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1031 17:05:45.437950  123788 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1031 17:05:45.438007  123788 kubeadm.go:317] OS: Linux
	I1031 17:05:45.438080  123788 kubeadm.go:317] CGROUPS_CPU: enabled
	I1031 17:05:45.438188  123788 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1031 17:05:45.438265  123788 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1031 17:05:45.438327  123788 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1031 17:05:45.438408  123788 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1031 17:05:45.438474  123788 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1031 17:05:45.438542  123788 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1031 17:05:45.438609  123788 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1031 17:05:45.438681  123788 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1031 17:05:45.506713  123788 kubeadm.go:317] W1031 17:05:45.405956    6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1031 17:05:45.506996  123788 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1031 17:05:45.507114  123788 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:05:45.507178  123788 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I1031 17:05:45.507221  123788 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I1031 17:05:45.507264  123788 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I1031 17:05:45.507371  123788 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1031 17:05:45.507485  123788 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1031 17:05:45.507500  123788 kubeadm.go:398] StartCluster complete in 4m19.348589229s
	I1031 17:05:45.507531  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:05:45.507575  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:05:45.530536  123788 cri.go:87] found id: ""
	I1031 17:05:45.530565  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.530573  123788 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:05:45.530579  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:05:45.530626  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:05:45.554752  123788 cri.go:87] found id: ""
	I1031 17:05:45.554777  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.554783  123788 logs.go:276] No container was found matching "etcd"
	I1031 17:05:45.554789  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:05:45.554831  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:05:45.578518  123788 cri.go:87] found id: ""
	I1031 17:05:45.578542  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.578548  123788 logs.go:276] No container was found matching "coredns"
	I1031 17:05:45.578554  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:05:45.578603  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:05:45.602333  123788 cri.go:87] found id: ""
	I1031 17:05:45.602356  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.602363  123788 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:05:45.602368  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:05:45.602408  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:05:45.625824  123788 cri.go:87] found id: ""
	I1031 17:05:45.625847  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.625853  123788 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:05:45.625859  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:05:45.625920  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:05:45.649488  123788 cri.go:87] found id: ""
	I1031 17:05:45.649513  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.649519  123788 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:05:45.649526  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:05:45.649574  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:05:45.672881  123788 cri.go:87] found id: ""
	I1031 17:05:45.672907  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.672914  123788 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:05:45.672920  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:05:45.672965  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:05:45.695705  123788 cri.go:87] found id: ""
	I1031 17:05:45.695729  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.695736  123788 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:05:45.695744  123788 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:05:45.695756  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:05:45.827779  123788 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
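The `describe nodes` fallback fails with "connection refused" because nothing is listening on the apiserver port (8443) after the reset. A hedged sketch of probing whether that endpoint accepts TCP connections, which is what kubectl's error boils down to (hypothetical helper):

```python
import socket

def apiserver_reachable(host="127.0.0.1", port=8443, timeout=1.0):
    """TCP-connect to the apiserver endpoint; a refused or timed-out
    connection is what kubectl surfaces as 'The connection to the
    server ... was refused'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```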
	I1031 17:05:45.827803  123788 logs.go:123] Gathering logs for containerd ...
	I1031 17:05:45.827814  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:05:45.882431  123788 logs.go:123] Gathering logs for container status ...
	I1031 17:05:45.882482  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:05:45.908973  123788 logs.go:123] Gathering logs for kubelet ...
	I1031 17:05:45.909003  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:05:45.967611  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661461    4266 projected.go:192] Error preparing data for projected volume kube-api-access-8mn6l for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.968060  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661580    4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l podName:5031015c-081e-49e2-8d46-09fd879a755c nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661550988 +0000 UTC m=+9.136031253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mn6l" (UniqueName: "kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l") pod "storage-provisioner" (UID: "5031015c-081e-49e2-8d46-09fd879a755c") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.968229  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.661699    4266 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.968390  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661728    4266 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.968572  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661819    4266 projected.go:192] Error preparing data for projected volume kube-api-access-d8dpf for pod kube-system/coredns-6d4b75cb6d-8wsrc: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.968978  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661876    4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e76d465-ae9a-4121-b7ed-1ef94dd20b7e-kube-api-access-d8dpf podName:8e76d465-ae9a-4121-b7ed-1ef94dd20b7e nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661860993 +0000 UTC m=+9.136341257 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d8dpf" (UniqueName: "kubernetes.io/projected/8e76d465-ae9a-4121-b7ed-1ef94dd20b7e-kube-api-access-d8dpf") pod "coredns-6d4b75cb6d-8wsrc" (UID: "8e76d465-ae9a-4121-b7ed-1ef94dd20b7e") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.969129  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.662000    4266 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.969296  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662020    4266 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.969441  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.662225    4266 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.969602  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662242    4266 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.969778  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662330    4266 projected.go:192] Error preparing data for projected volume kube-api-access-5m45q for pod kube-system/kindnet-jljff: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.970177  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662376    4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e66c31a9-8e36-4914-a086-32ba2b3dc004-kube-api-access-5m45q podName:e66c31a9-8e36-4914-a086-32ba2b3dc004 nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.662359447 +0000 UTC m=+9.136839704 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5m45q" (UniqueName: "kubernetes.io/projected/e66c31a9-8e36-4914-a086-32ba2b3dc004-kube-api-access-5m45q") pod "kindnet-jljff" (UID: "e66c31a9-8e36-4914-a086-32ba2b3dc004") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.970359  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662434    4266 projected.go:192] Error preparing data for projected volume kube-api-access-r84wv for pod kube-system/kube-proxy-54b5q: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.970760  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662472    4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff95637-a367-440b-918f-495391f2f1cf-kube-api-access-r84wv podName:0ff95637-a367-440b-918f-495391f2f1cf nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.662457708 +0000 UTC m=+9.136937970 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r84wv" (UniqueName: "kubernetes.io/projected/0ff95637-a367-440b-918f-495391f2f1cf-kube-api-access-r84wv") pod "kube-proxy-54b5q" (UID: "0ff95637-a367-440b-918f-495391f2f1cf") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	I1031 17:05:45.991682  123788 logs.go:123] Gathering logs for dmesg ...
	I1031 17:05:45.991709  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1031 17:05:46.006370  123788 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1031 17:05:45.405956    6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W1031 17:05:46.006406  123788 out.go:239] * 
	W1031 17:05:46.006520  123788 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1031 17:05:45.405956    6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W1031 17:05:46.006538  123788 out.go:239] * 
	W1031 17:05:46.007299  123788 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 17:05:46.010794  123788 out.go:177] X Problems detected in kubelet:
	I1031 17:05:46.012324  123788 out.go:177]   Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661461    4266 projected.go:192] Error preparing data for projected volume kube-api-access-8mn6l for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	I1031 17:05:46.013853  123788 out.go:177]   Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661580    4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l podName:5031015c-081e-49e2-8d46-09fd879a755c nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661550988 +0000 UTC m=+9.136031253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mn6l" (UniqueName: "kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l") pod "storage-provisioner" (UID: "5031015c-081e-49e2-8d46-09fd879a755c") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	I1031 17:05:46.015648  123788 out.go:177]   Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.661699    4266 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	I1031 17:05:46.017937  123788 out.go:177] 
	W1031 17:05:46.019427  123788 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1031 17:05:45.405956    6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W1031 17:05:46.019527  123788 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1031 17:05:46.019585  123788 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1031 17:05:46.021064  123788 out.go:177] 

** /stderr **
preload_test.go:69: out/minikube-linux-amd64 start -p test-preload-165950 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6 failed: exit status 81
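The GUEST_PORT_IN_USE failure above is kubeadm's preflight finding ports 2379 and 2380 (the default etcd client and peer ports) already bound, typically by an etcd left over from the previous `kubeadm init` in this test. As an aside, the `lsof -p<port>` in the printed suggestion takes a PID; matching a listener by port is `lsof -i :<port>`. A minimal sketch of checking the ports before retrying, assuming bash (for `/dev/tcp`) and iproute2's `ss` are available on the node:

```shell
# Check whether anything already listens on the etcd client/peer ports that
# kubeadm's preflight probes (2379/2380 are kubeadm defaults).
port_in_use() {
  # bash-only: opening /dev/tcp succeeds only if something accepts the connection
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for port in 2379 2380; do
  if port_in_use "$port"; then
    echo "port $port is in use:"
    # show the listening socket; seeing the owning process may require root
    ss -ltnp 2>/dev/null | grep ":$port " || true
  else
    echo "port $port is free"
  fi
done
```

Since the ports are bound inside the node's network namespace, this would be run from within the minikube container (e.g. via `minikube ssh -p test-preload-165950`), not on the host.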
panic.go:522: *** TestPreload FAILED at 2022-10-31 17:05:46.06141389 +0000 UTC m=+1788.836992767
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-165950
helpers_test.go:235: (dbg) docker inspect test-preload-165950:

-- stdout --
	[
	    {
	        "Id": "31321530ec32d3664bca2dd5534ecdafccad37b8d3386abfb104804c4c545f5b",
	        "Created": "2022-10-31T16:59:51.480968101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 120574,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-10-31T16:59:51.931253166Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/31321530ec32d3664bca2dd5534ecdafccad37b8d3386abfb104804c4c545f5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/31321530ec32d3664bca2dd5534ecdafccad37b8d3386abfb104804c4c545f5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/31321530ec32d3664bca2dd5534ecdafccad37b8d3386abfb104804c4c545f5b/hosts",
	        "LogPath": "/var/lib/docker/containers/31321530ec32d3664bca2dd5534ecdafccad37b8d3386abfb104804c4c545f5b/31321530ec32d3664bca2dd5534ecdafccad37b8d3386abfb104804c4c545f5b-json.log",
	        "Name": "/test-preload-165950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-165950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-165950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9162f9aa27622193176e2a53f1639007e77e951edd086c49393a77b26bf96d70-init/diff:/var/lib/docker/overlay2/850407c9352fc6d39f5a61f0f7868bc687359dfa2a9e604aacedd9e4180b6b24/diff:/var/lib/docker/overlay2/21aaafded5bd8cd556e28d44c5789deca54d553c1b7434f81407bd7fcd1957e2/diff:/var/lib/docker/overlay2/6092cf791661e4cab1851c6157178d18fd0167b1f47a6bebec580856fb033b44/diff:/var/lib/docker/overlay2/de1b6fab5ea890ce9ec3ab284acb657037d204cfa01fe082b7ab7fb1c0539f4a/diff:/var/lib/docker/overlay2/4ce8b04194bb323d53c06b240875a6203e31c8f7f41d68021a3a9c268299cbed/diff:/var/lib/docker/overlay2/efdd112bff28ec4eeb4274df5357bc6a943d954bf3bb5969c95a3f396318e5f2/diff:/var/lib/docker/overlay2/bf27ecc71ffb48aba0eb712986cbc98c99838dc8b04631580d9a9495f718f594/diff:/var/lib/docker/overlay2/448bbda6d5530c89aca7714db71b5eb84689a6dba7ac558086a7568817db54ae/diff:/var/lib/docker/overlay2/b43560491d25a8924ac5cae2ec4dc68deb89b0f8f1e1b7a720313dc4eeb82428/diff:/var/lib/docker/overlay2/2027e3
3b3f092c531efa1f98cabb990a64b3ff51978a38e4261ef8e82655e56d/diff:/var/lib/docker/overlay2/40d06c11aaa05bdf4d5349d7d00fdf7d8f962768ce49b8f03d4d2d5a23706a83/diff:/var/lib/docker/overlay2/3a1bdaf48ececa097bf7b4c7e715cdc5045b596a2cb2bf0d2d335363c91b7763/diff:/var/lib/docker/overlay2/a37c63314afa70bd7e634537d33bcefbffbbe9f43c8aa45d9d42bd58cc3b0cf8/diff:/var/lib/docker/overlay2/ff91a87ac6071b8ab64a547410e1499ce95011395ea036dd714d0dd5129adb37/diff:/var/lib/docker/overlay2/aefdb5f8ac62063ccf24e1bc21262559900c234b9c151acd755a4b834d51fea9/diff:/var/lib/docker/overlay2/915c92a89aba7500f1323ec1a9c9a53d856e818f9776d9f9ed08bf36936d3e4a/diff:/var/lib/docker/overlay2/52c13726cbf2ed741bd08a4fd814eca88e84b1d329661e62d858be944b3756fa/diff:/var/lib/docker/overlay2/459b8ced782783b6c14513346d3291aeaa7bf95628d52d5734ceb8e3dc2bb34a/diff:/var/lib/docker/overlay2/15b295bfa3bda6886453bc187c23d72b25ee63f5085ee0f7f33e1c16159f3458/diff:/var/lib/docker/overlay2/23b0f6d1317fd997d142b8b463d727f2337496dada67bd1d2d3b0e9e864b6c6b/diff:/var/lib/d
ocker/overlay2/5865c95ad7cd03f9b4844f71209de766041b054c00595d5aec780c06ae768435/diff:/var/lib/docker/overlay2/efa08e39c835181ac59410e6fa91805bdf6038812cf9de2fe6166b28ddbd0551/diff:/var/lib/docker/overlay2/e0b9a735c6e765ddbdea44d18a2b26b9b2c3db322dca7fbab94d6e76ab322d51/diff:/var/lib/docker/overlay2/5643dd6e2ea4886915404d641ac2a2f0327156d44c5cd2960ec0ce17a61bedb2/diff:/var/lib/docker/overlay2/4f789b09379fe08af21ac5ede6a916c169e328eac752d559ecde59f6f36263ea/diff:/var/lib/docker/overlay2/4fdd55958a1cbe05aa4c0d860e201090b87575a39b37ea9555600f8cb3c2256c/diff:/var/lib/docker/overlay2/db64f95c578859a9eb3b7bb1debcf894e5466441c4c6c27c9a3eae7247029669/diff:/var/lib/docker/overlay2/6ea16e3482414ff15bfc6317e5fb3463df41afc3fa76d7b22ef86e1a735fbf2d/diff:/var/lib/docker/overlay2/2141b9e79d9eca44b4934f0ab5e90e3a7a6326ad619ce3e981da60d3b9397952/diff:/var/lib/docker/overlay2/ed7d69a3a4de28360197cbde205a3c218b2c785ad29581c25ae9d74275fbc3af/diff:/var/lib/docker/overlay2/7a003859a39e8ad3bd9681a6e25c7687c68b45396a9bd9309f5f2fc5a6d
b937f/diff:/var/lib/docker/overlay2/9f343157cfc9dd91c334ef0927fcbdff9b1c543bc670a05b547ad650c42a9e4e/diff:/var/lib/docker/overlay2/1895e41d6462ac28032e1938f1c755f37d5063dbfcfce66c80a1bb5542592f87/diff:/var/lib/docker/overlay2/139059382b6f47a4d917321fc96bb88b4e4496bc6d72d5c140f22414932cd23a/diff:/var/lib/docker/overlay2/877f4b5fd322b19211f62544018b39a1fc4b920707d11dc957cac06f2232d4b5/diff:/var/lib/docker/overlay2/7f935ec11ddf890b56355eff56a25f995efb95fe3f8718078d517e5126fc40af/diff:/var/lib/docker/overlay2/f746de1e06eaa48a0ff284cbeec7e6f78c3eb97d1a90e020d82d10c2654236e7/diff:/var/lib/docker/overlay2/f58fee49407523fa2a2a815cfb285f088abd1fc7b3196c3c1a6b27a8cc1d4a3f/diff:/var/lib/docker/overlay2/2f9e685ccc40a5063568a58dc39e286eab6aa4fd66ad71614b75fb8082c6c201/diff:/var/lib/docker/overlay2/5d49dd0a636da4d0a250625e83cf665e98dba840590d94ac41b6f345e76aa187/diff:/var/lib/docker/overlay2/818cc610ded8dc62555773ef1e35bea879ef657b00a70e6c878f5424f518134a/diff:/var/lib/docker/overlay2/c98da52ad37a10af980b89a4e4ddd50b85ffa2
12a2847b428571f2544cb3eeb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9162f9aa27622193176e2a53f1639007e77e951edd086c49393a77b26bf96d70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9162f9aa27622193176e2a53f1639007e77e951edd086c49393a77b26bf96d70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9162f9aa27622193176e2a53f1639007e77e951edd086c49393a77b26bf96d70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-165950",
	                "Source": "/var/lib/docker/volumes/test-preload-165950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-165950",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-165950",
	                "name.minikube.sigs.k8s.io": "test-preload-165950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfccc3b44d496a91df157bced05afac5b142fddf1d4354ac1695001e0e240870",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49277"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49276"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49273"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49275"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49274"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cfccc3b44d49",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-165950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "31321530ec32",
	                        "test-preload-165950"
	                    ],
	                    "NetworkID": "e0378850a42df319b63eed4a878977d5ca7d60ed961bbdc3e2d810f624175c13",
	                    "EndpointID": "680c74d6c0d63034b9a03de0b279adb28e402d3efdcb722bb8ca748f3bbb5d9a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
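Dumps like the `docker inspect` output above are easier to scan programmatically than by eye. A small sketch of pulling the two fields this post-mortem cares about (container state and the published apiserver port) from a saved dump; the inline JSON is a hand-trimmed excerpt of the output above, and `inspect.json` is a hypothetical file name that would normally come from `docker inspect test-preload-165950 > inspect.json`:

```shell
# Hand-trimmed excerpt of the inspect output above, saved for offline querying
cat > inspect.json <<'EOF'
[{"State": {"Status": "running", "Pid": 120574},
  "NetworkSettings": {"Ports": {"8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "49274"}]}}}]
EOF

# `docker inspect` returns a JSON array with one element per container
python3 - inspect.json <<'PY'
import json, sys

data = json.load(open(sys.argv[1]))[0]
print("status:", data["State"]["Status"], "pid:", data["State"]["Pid"])
print("apiserver host port:",
      data["NetworkSettings"]["Ports"]["8443/tcp"][0]["HostPort"])
PY
```

Against a live container, `docker inspect -f '{{.State.Status}}' test-preload-165950` extracts the same field in-line via Go templates, with no intermediate file.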
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-165950 -n test-preload-165950
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-165950 -n test-preload-165950: exit status 2 (356.870202ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-165950 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-165059 ssh -n                                                                 | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
	|         | multinode-165059-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| cp      | multinode-165059 cp multinode-165059-m03:/home/docker/cp-test.txt                       | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
	|         | multinode-165059:/home/docker/cp-test_multinode-165059-m03_multinode-165059.txt         |                      |         |         |                     |                     |
	| ssh     | multinode-165059 ssh -n                                                                 | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
	|         | multinode-165059-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-165059 ssh -n multinode-165059 sudo cat                                       | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
	|         | /home/docker/cp-test_multinode-165059-m03_multinode-165059.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-165059 cp multinode-165059-m03:/home/docker/cp-test.txt                       | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
	|         | multinode-165059-m02:/home/docker/cp-test_multinode-165059-m03_multinode-165059-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-165059 ssh -n                                                                 | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
	|         | multinode-165059-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-165059 ssh -n multinode-165059-m02 sudo cat                                   | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
	|         | /home/docker/cp-test_multinode-165059-m03_multinode-165059-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-165059 node stop m03                                                          | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
	| node    | multinode-165059 node start                                                             | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:54 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-165059                                                                | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:54 UTC |                     |
	| stop    | -p multinode-165059                                                                     | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:54 UTC | 31 Oct 22 16:54 UTC |
	| start   | -p multinode-165059                                                                     | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:54 UTC | 31 Oct 22 16:56 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-165059                                                                | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:56 UTC |                     |
	| node    | multinode-165059 node delete                                                            | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:56 UTC | 31 Oct 22 16:56 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-165059 stop                                                                   | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:56 UTC | 31 Oct 22 16:57 UTC |
	| start   | -p multinode-165059                                                                     | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:57 UTC | 31 Oct 22 16:59 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | list -p multinode-165059                                                                | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC |                     |
	| start   | -p multinode-165059-m02                                                                 | multinode-165059-m02 | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| start   | -p multinode-165059-m03                                                                 | multinode-165059-m03 | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC | 31 Oct 22 16:59 UTC |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | add -p multinode-165059                                                                 | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC |                     |
	| delete  | -p multinode-165059-m03                                                                 | multinode-165059-m03 | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC | 31 Oct 22 16:59 UTC |
	| delete  | -p multinode-165059                                                                     | multinode-165059     | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC | 31 Oct 22 16:59 UTC |
	| start   | -p test-preload-165950                                                                  | test-preload-165950  | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC | 31 Oct 22 17:00 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --wait=true --preload=false                                                             |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-165950                                                                  | test-preload-165950  | jenkins | v1.27.1 | 31 Oct 22 17:00 UTC | 31 Oct 22 17:00 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| start   | -p test-preload-165950                                                                  | test-preload-165950  | jenkins | v1.27.1 | 31 Oct 22 17:00 UTC |                     |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=docker                                                             |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.6                                                            |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/31 17:00:53
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:00:53.400798  123788 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:00:53.400923  123788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:00:53.400937  123788 out.go:309] Setting ErrFile to fd 2...
	I1031 17:00:53.400944  123788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:00:53.401087  123788 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
	I1031 17:00:53.401650  123788 out.go:303] Setting JSON to false
	I1031 17:00:53.402675  123788 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2603,"bootTime":1667233050,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:00:53.402746  123788 start.go:126] virtualization: kvm guest
	I1031 17:00:53.405697  123788 out.go:177] * [test-preload-165950] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:00:53.407231  123788 out.go:177]   - MINIKUBE_LOCATION=15232
	I1031 17:00:53.407135  123788 notify.go:220] Checking for updates...
	I1031 17:00:53.411021  123788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:00:53.412510  123788 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 17:00:53.414023  123788 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	I1031 17:00:53.415484  123788 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:00:53.417194  123788 config.go:180] Loaded profile config "test-preload-165950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I1031 17:00:53.419061  123788 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I1031 17:00:53.420384  123788 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 17:00:53.448510  123788 docker.go:137] docker version: linux-20.10.21
	I1031 17:00:53.448586  123788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 17:00:53.541306  123788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:39 SystemTime:2022-10-31 17:00:53.467933423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 17:00:53.541406  123788 docker.go:254] overlay module found
	I1031 17:00:53.543484  123788 out.go:177] * Using the docker driver based on existing profile
	I1031 17:00:53.544875  123788 start.go:282] selected driver: docker
	I1031 17:00:53.544894  123788 start.go:808] validating driver "docker" against &{Name:test-preload-165950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:00:53.544985  123788 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:00:53.545708  123788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 17:00:53.643264  123788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:39 SystemTime:2022-10-31 17:00:53.565995365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 17:00:53.643528  123788 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 17:00:53.643548  123788 cni.go:95] Creating CNI manager for ""
	I1031 17:00:53.643554  123788 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 17:00:53.643565  123788 start_flags.go:317] config:
	{Name:test-preload-165950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:00:53.645909  123788 out.go:177] * Starting control plane node test-preload-165950 in cluster test-preload-165950
	I1031 17:00:53.647496  123788 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1031 17:00:53.648990  123788 out.go:177] * Pulling base image ...
	I1031 17:00:53.650498  123788 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1031 17:00:53.650525  123788 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1031 17:00:53.672685  123788 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1031 17:00:53.672711  123788 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1031 17:00:53.749918  123788 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1031 17:00:53.750010  123788 cache.go:57] Caching tarball of preloaded images
	I1031 17:00:53.750392  123788 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1031 17:00:53.752786  123788 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
	I1031 17:00:53.754251  123788 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1031 17:00:53.854172  123788 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1031 17:00:56.444223  123788 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1031 17:00:56.444331  123788 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1031 17:00:57.333820  123788 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.6 on containerd
	I1031 17:00:57.333953  123788 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/config.json ...
	I1031 17:00:57.334153  123788 cache.go:208] Successfully downloaded all kic artifacts
	I1031 17:00:57.334182  123788 start.go:364] acquiring machines lock for test-preload-165950: {Name:mk5e2148763cdda5260ddcfe6c84de7081b8765d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:00:57.334270  123788 start.go:368] acquired machines lock for "test-preload-165950" in 68.35µs
	I1031 17:00:57.334286  123788 start.go:96] Skipping create...Using existing machine configuration
	I1031 17:00:57.334291  123788 fix.go:55] fixHost starting: 
	I1031 17:00:57.334493  123788 cli_runner.go:164] Run: docker container inspect test-preload-165950 --format={{.State.Status}}
	I1031 17:00:57.357514  123788 fix.go:103] recreateIfNeeded on test-preload-165950: state=Running err=<nil>
	W1031 17:00:57.357546  123788 fix.go:129] unexpected machine state, will restart: <nil>
	I1031 17:00:57.360746  123788 out.go:177] * Updating the running docker "test-preload-165950" container ...
	I1031 17:00:57.362040  123788 machine.go:88] provisioning docker machine ...
	I1031 17:00:57.362068  123788 ubuntu.go:169] provisioning hostname "test-preload-165950"
	I1031 17:00:57.362115  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:57.384936  123788 main.go:134] libmachine: Using SSH client type: native
	I1031 17:00:57.385100  123788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49277 <nil> <nil>}
	I1031 17:00:57.385117  123788 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-165950 && echo "test-preload-165950" | sudo tee /etc/hostname
	I1031 17:00:57.508480  123788 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-165950
	
	I1031 17:00:57.508560  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:57.532320  123788 main.go:134] libmachine: Using SSH client type: native
	I1031 17:00:57.532481  123788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49277 <nil> <nil>}
	I1031 17:00:57.532510  123788 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-165950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-165950/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-165950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:00:57.648181  123788 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:00:57.648212  123788 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-3650/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-3650/.minikube}
	I1031 17:00:57.648234  123788 ubuntu.go:177] setting up certificates
	I1031 17:00:57.648244  123788 provision.go:83] configureAuth start
	I1031 17:00:57.648321  123788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-165950
	I1031 17:00:57.672013  123788 provision.go:138] copyHostCerts
	I1031 17:00:57.672105  123788 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem, removing ...
	I1031 17:00:57.672125  123788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem
	I1031 17:00:57.672195  123788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem (1078 bytes)
	I1031 17:00:57.672283  123788 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem, removing ...
	I1031 17:00:57.672295  123788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem
	I1031 17:00:57.672323  123788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem (1123 bytes)
	I1031 17:00:57.672372  123788 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem, removing ...
	I1031 17:00:57.672381  123788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem
	I1031 17:00:57.672407  123788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem (1679 bytes)
	I1031 17:00:57.672455  123788 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem org=jenkins.test-preload-165950 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-165950]
	I1031 17:00:57.797650  123788 provision.go:172] copyRemoteCerts
	I1031 17:00:57.797711  123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:00:57.797742  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:57.822580  123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
	I1031 17:00:57.907487  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 17:00:57.925574  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1031 17:00:57.945093  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 17:00:57.962901  123788 provision.go:86] duration metric: configureAuth took 314.615745ms
	I1031 17:00:57.962927  123788 ubuntu.go:193] setting minikube options for container-runtime
	I1031 17:00:57.963104  123788 config.go:180] Loaded profile config "test-preload-165950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I1031 17:00:57.963117  123788 machine.go:91] provisioned docker machine in 601.061986ms
	I1031 17:00:57.963123  123788 start.go:300] post-start starting for "test-preload-165950" (driver="docker")
	I1031 17:00:57.963131  123788 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:00:57.963167  123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:00:57.963199  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:57.987686  123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
	I1031 17:00:58.071508  123788 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:00:58.074511  123788 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1031 17:00:58.074535  123788 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1031 17:00:58.074543  123788 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1031 17:00:58.074549  123788 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1031 17:00:58.074562  123788 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/addons for local assets ...
	I1031 17:00:58.074617  123788 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/files for local assets ...
	I1031 17:00:58.074698  123788 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem -> 100972.pem in /etc/ssl/certs
	I1031 17:00:58.074797  123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:00:58.082460  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /etc/ssl/certs/100972.pem (1708 bytes)
	I1031 17:00:58.099618  123788 start.go:303] post-start completed in 136.482468ms
	I1031 17:00:58.099687  123788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 17:00:58.099718  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:58.122912  123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
	I1031 17:00:58.204709  123788 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1031 17:00:58.208921  123788 fix.go:57] fixHost completed within 874.623341ms
	I1031 17:00:58.208952  123788 start.go:83] releasing machines lock for "test-preload-165950", held for 874.669884ms
	I1031 17:00:58.209045  123788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-165950
	I1031 17:00:58.231368  123788 ssh_runner.go:195] Run: systemctl --version
	I1031 17:00:58.231411  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:58.231475  123788 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1031 17:00:58.231537  123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
	I1031 17:00:58.254909  123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
	I1031 17:00:58.256772  123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
	I1031 17:00:58.359932  123788 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:00:58.370867  123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:00:58.380533  123788 docker.go:189] disabling docker service ...
	I1031 17:00:58.380587  123788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 17:00:58.390611  123788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 17:00:58.400540  123788 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 17:00:58.503571  123788 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 17:00:58.601357  123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 17:00:58.610768  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:00:58.623982  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I1031 17:00:58.631971  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1031 17:00:58.639948  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1031 17:00:58.647731  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I1031 17:00:58.655857  123788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:00:58.662159  123788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:00:58.668160  123788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:00:58.765634  123788 ssh_runner.go:195] Run: sudo systemctl restart containerd
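The sed invocations above all follow one shape: anchor on the key, replace the whole line, edit in place. A sketch of that pattern against a sample config.toml (the sample file contents and /tmp path are assumptions for the demo):

```shell
# Rewrite selected keys in a sample containerd config.toml using the same
# whole-line sed replacement pattern seen in the log.
demo=/tmp/containerd-demo
mkdir -p "$demo"
cat > "$demo/config.toml" <<'EOF'
sandbox_image = "k8s.gcr.io/pause:3.6"
SystemdCgroup = true
conf_dir = "/etc/cni/net.d"
EOF
sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i "$demo/config.toml"
sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i "$demo/config.toml"
sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i "$demo/config.toml"
```

Because the pattern matches the entire line (`^.*key = .*$`), any prior value and indentation are discarded, which is why the approach works regardless of what the key was set to before.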
	I1031 17:00:58.838270  123788 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1031 17:00:58.838340  123788 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1031 17:00:58.842645  123788 start.go:472] Will wait 60s for crictl version
	I1031 17:00:58.842710  123788 ssh_runner.go:195] Run: sudo crictl version
	I1031 17:00:58.873990  123788 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-10-31T17:00:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1031 17:01:09.921926  123788 ssh_runner.go:195] Run: sudo crictl version
	I1031 17:01:09.945289  123788 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1031 17:01:09.945349  123788 ssh_runner.go:195] Run: containerd --version
	I1031 17:01:09.970198  123788 ssh_runner.go:195] Run: containerd --version
	I1031 17:01:09.996976  123788 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
	I1031 17:01:09.998646  123788 cli_runner.go:164] Run: docker network inspect test-preload-165950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1031 17:01:10.021855  123788 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1031 17:01:10.025738  123788 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1031 17:01:10.025795  123788 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 17:01:10.050811  123788 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
	I1031 17:01:10.050875  123788 ssh_runner.go:195] Run: which lz4
	I1031 17:01:10.053855  123788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 17:01:10.056765  123788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I1031 17:01:10.056789  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
	I1031 17:01:11.012204  123788 containerd.go:496] Took 0.958385 seconds to copy over tarball
	I1031 17:01:11.012279  123788 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 17:01:13.898440  123788 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.886126931s)
	I1031 17:01:13.898474  123788 containerd.go:503] Took 2.886238 seconds to extract the tarball
	I1031 17:01:13.898485  123788 ssh_runner.go:146] rm: /preloaded.tar.lz4
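The preload step above copies the tarball over, extracts it into /var with `tar -I lz4`, then deletes it. A sketch of that pack/extract/cleanup sequence with the same `-I` (external compressor) invocation shape; gzip stands in for lz4 here since lz4 may not be installed, and the /tmp paths and file names are assumptions for the demo:

```shell
# Pack and unpack a payload via tar's -I flag, mirroring the preload
# extraction step (lz4 swapped for gzip), then remove the tarball.
work=/tmp/preload-demo
rm -rf "$work" && mkdir -p "$work/src/lib/images" "$work/dst"
echo "cached-layer" > "$work/src/lib/images/layer1"
tar -I gzip -C "$work/src" -cf "$work/preloaded.tar.gz" lib
tar -I gzip -C "$work/dst" -xf "$work/preloaded.tar.gz"
rm "$work/preloaded.tar.gz"   # mirrors the rm of /preloaded.tar.lz4 in the log
```

Extracting directly into the target root with `-C` is what lets the preloaded containerd image store land under /var/lib without a second copy step.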
	I1031 17:01:13.924871  123788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:01:14.027291  123788 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:01:14.105585  123788 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 17:01:14.153742  123788 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 17:01:14.153832  123788 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:01:14.153879  123788 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
	I1031 17:01:14.153933  123788 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1031 17:01:14.153950  123788 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I1031 17:01:14.153997  123788 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1031 17:01:14.154093  123788 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I1031 17:01:14.154143  123788 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1031 17:01:14.154158  123788 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1031 17:01:14.154858  123788 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1031 17:01:14.154930  123788 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1031 17:01:14.155027  123788 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1031 17:01:14.155037  123788 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I1031 17:01:14.155035  123788 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:01:14.155041  123788 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I1031 17:01:14.154859  123788 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
	I1031 17:01:14.155056  123788 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1031 17:01:14.639297  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I1031 17:01:14.649105  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
	I1031 17:01:14.661797  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I1031 17:01:14.676815  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I1031 17:01:14.688769  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
	I1031 17:01:14.693655  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
	I1031 17:01:14.714906  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
	I1031 17:01:14.949489  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1031 17:01:15.471396  123788 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1031 17:01:15.471444  123788 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I1031 17:01:15.471487  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.667668  123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6": (1.005826513s)
	I1031 17:01:15.667922  123788 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1031 17:01:15.667990  123788 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1031 17:01:15.668043  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.667834  123788 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1031 17:01:15.668185  123788 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I1031 17:01:15.668229  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.667889  123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6": (1.018754573s)
	I1031 17:01:15.668329  123788 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
	I1031 17:01:15.668357  123788 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1031 17:01:15.668378  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.675016  123788 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
	I1031 17:01:15.675057  123788 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1031 17:01:15.675083  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.748343  123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6": (1.05465106s)
	I1031 17:01:15.748403  123788 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
	I1031 17:01:15.748433  123788 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
	I1031 17:01:15.748479  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.773417  123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6": (1.058470688s)
	I1031 17:01:15.773475  123788 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
	I1031 17:01:15.773543  123788 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1031 17:01:15.773610  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.796393  123788 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1031 17:01:15.796447  123788 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:01:15.796450  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I1031 17:01:15.796474  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:15.796543  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I1031 17:01:15.796574  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
	I1031 17:01:15.796615  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I1031 17:01:15.796661  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
	I1031 17:01:15.796762  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
	I1031 17:01:15.796793  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
	I1031 17:01:15.849303  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:01:16.518326  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I1031 17:01:16.518410  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
	I1031 17:01:16.518448  123788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I1031 17:01:16.518466  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I1031 17:01:16.518546  123788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I1031 17:01:16.518609  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I1031 17:01:16.518661  123788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I1031 17:01:16.518667  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
	I1031 17:01:16.519958  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
	I1031 17:01:16.520022  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
	I1031 17:01:16.520164  123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1031 17:01:16.520245  123788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1031 17:01:16.522338  123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1031 17:01:16.522367  123788 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1031 17:01:16.522400  123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I1031 17:01:16.522738  123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1031 17:01:16.522918  123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1031 17:01:16.523532  123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1031 17:01:23.289265  123788 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (6.766830961s)
	I1031 17:01:23.289325  123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I1031 17:01:23.289354  123788 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1031 17:01:23.289408  123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I1031 17:01:24.806710  123788 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.517273083s)
	I1031 17:01:24.806742  123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I1031 17:01:24.806797  123788 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
	I1031 17:01:24.806862  123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I1031 17:01:24.985051  123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I1031 17:01:24.985104  123788 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1031 17:01:24.985171  123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1031 17:01:25.471171  123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1031 17:01:25.471237  123788 cache_images.go:92] LoadImages completed in 11.317456964s
	W1031 17:01:25.471403  123788 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6: no such file or directory
	I1031 17:01:25.471469  123788 ssh_runner.go:195] Run: sudo crictl info
	I1031 17:01:25.549548  123788 cni.go:95] Creating CNI manager for ""
	I1031 17:01:25.549585  123788 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 17:01:25.549601  123788 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:01:25.549618  123788 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-165950 NodeName:test-preload-165950 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1031 17:01:25.549786  123788 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-165950"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:01:25.549897  123788 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-165950 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.6 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:01:25.549966  123788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
	I1031 17:01:25.559048  123788 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:01:25.559118  123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:01:25.568146  123788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
	I1031 17:01:25.583110  123788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:01:25.598681  123788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I1031 17:01:25.662413  123788 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1031 17:01:25.666268  123788 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950 for IP: 192.168.67.2
	I1031 17:01:25.666403  123788 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key
	I1031 17:01:25.666458  123788 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key
	I1031 17:01:25.666558  123788 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/client.key
	I1031 17:01:25.666633  123788 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/apiserver.key.c7fa3a9e
	I1031 17:01:25.666689  123788 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/proxy-client.key
	I1031 17:01:25.666801  123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem (1338 bytes)
	W1031 17:01:25.666847  123788 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097_empty.pem, impossibly tiny 0 bytes
	I1031 17:01:25.666873  123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:01:25.666908  123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem (1078 bytes)
	I1031 17:01:25.666943  123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:01:25.666974  123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem (1679 bytes)
	I1031 17:01:25.667033  123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem (1708 bytes)
	I1031 17:01:25.667673  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:01:25.690455  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 17:01:25.763539  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:01:25.790140  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 17:01:25.861083  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:01:25.879599  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:01:25.898515  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:01:25.922119  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:01:25.959078  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /usr/share/ca-certificates/100972.pem (1708 bytes)
	I1031 17:01:25.980032  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:01:26.000424  123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem --> /usr/share/ca-certificates/10097.pem (1338 bytes)
	I1031 17:01:26.053381  123788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1031 17:01:26.067535  123788 ssh_runner.go:195] Run: openssl version
	I1031 17:01:26.072627  123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:01:26.080989  123788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:01:26.085427  123788 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 31 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:01:26.085503  123788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:01:26.091369  123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:01:26.099802  123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10097.pem && ln -fs /usr/share/ca-certificates/10097.pem /etc/ssl/certs/10097.pem"
	I1031 17:01:26.108642  123788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10097.pem
	I1031 17:01:26.112303  123788 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 31 16:41 /usr/share/ca-certificates/10097.pem
	I1031 17:01:26.112374  123788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10097.pem
	I1031 17:01:26.125705  123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10097.pem /etc/ssl/certs/51391683.0"
	I1031 17:01:26.133946  123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100972.pem && ln -fs /usr/share/ca-certificates/100972.pem /etc/ssl/certs/100972.pem"
	I1031 17:01:26.142159  123788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100972.pem
	I1031 17:01:26.145685  123788 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 31 16:41 /usr/share/ca-certificates/100972.pem
	I1031 17:01:26.145748  123788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100972.pem
	I1031 17:01:26.150967  123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100972.pem /etc/ssl/certs/3ec20f2e.0"
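The symlink names created above (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) follow OpenSSL's c_rehash convention: the file name is the certificate's subject-name hash, as printed by `openssl x509 -hash`, plus a `.0` suffix. A minimal, self-contained sketch of that convention, using a throwaway self-signed CA in a temp directory (the log itself links real certs under `/usr/share/ca-certificates` into `/etc/ssl/certs`):

```shell
set -e
tmp=$(mktemp -d)
# Generate a throwaway self-signed CA so the example needs no existing certs.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" -days 1 2>/dev/null
# The hash-named symlink is what the TLS stack looks up at verify time.
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
ln -fs "$tmp/ca.pem" "$tmp/${hash}.0"
ls -l "$tmp/${hash}.0"
```

This is why the log re-runs `openssl x509 -hash -noout` per certificate before each `ln -fs`: the link name cannot be chosen ahead of time, it is derived from the cert's subject.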
	I1031 17:01:26.158917  123788 kubeadm.go:396] StartCluster: {Name:test-preload-165950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:01:26.159010  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1031 17:01:26.159074  123788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 17:01:26.185271  123788 cri.go:87] found id: "9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c"
	I1031 17:01:26.185298  123788 cri.go:87] found id: "9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf"
	I1031 17:01:26.185306  123788 cri.go:87] found id: "92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e"
	I1031 17:01:26.185314  123788 cri.go:87] found id: ""
	I1031 17:01:26.185368  123788 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1031 17:01:26.219799  123788 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e","pid":2647,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e/rootfs","created":"2022-10-31T17:00:48.864140497Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-8wsrc","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","pid":2192,"status":"running",
"bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489/rootfs","created":"2022-10-31T17:00:31.051060479Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-54b5q_0ff95637-a367-440b-918f-495391f2f1cf","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-54b5q","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823","pid":1627,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823/rootfs","created":"2022-10-31T17:00:11.805153802Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","io.kubernetes.cri.sandbox-name":"etcd-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f","pid":1649,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f97f1cbba7302aeb3085
b591e2b35bc859465b29b1dbeeabec247e6d5bae53f/rootfs","created":"2022-10-31T17:00:11.813683549Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322","pid":3587,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322/rootfs","created":"2022-10-31T17:01:17.561189062Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.
cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-165950_745aa6453df7e4d7a2bedb8ef855e2b8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95","pid":3592,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95/rootfs","created":"2022-10-31T17:01:17.563577882Z","annotations":{"io.kubernetes.cri.container-type":"san
dbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-54b5q_0ff95637-a367-440b-918f-495391f2f1cf","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-54b5q","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4","pid":3582,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4/rootfs","created":"2022-10-31T17:01:17.563663092Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kub
ernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-8wsrc_8e76d465-ae9a-4121-b7ed-1ef94dd20b7e","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-8wsrc","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53","pid":2448,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53/rootfs","created":"2022-10-31T17:00:34.399637747Z","annotations":{"io.kubernetes.cri.container-name":"kin
dnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","io.kubernetes.cri.sandbox-name":"kindnet-jljff","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","pid":1514,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2/rootfs","created":"2022-10-31T17:00:11.599833796Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"534d5230b
843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-165950_f7f285bbceeae66435f07854fddd011c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49","pid":1640,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49/rootfs","created":"2022-10-31T17:00:11.816128062Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4
","io.kubernetes.cri.sandbox-id":"534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f","pid":3586,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f/rootfs","created":"2022-10-31T17:01:17.562640113Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f","io.kubernetes.cri.sandbox-log-directory":"
/var/log/pods/kube-system_kindnet-jljff_e66c31a9-8e36-4914-a086-32ba2b3dc004","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-jljff","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","pid":2589,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1/rootfs","created":"2022-10-31T17:00:48.750343759Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pod
s/kube-system_storage-provisioner_5031015c-081e-49e2-8d46-09fd879a755c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf","pid":2648,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf/rootfs","created":"2022-10-31T17:00:48.864144417Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.k
ubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e","pid":3774,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e/rootfs","created":"2022-10-31T17:01:22.960016255Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","pid":151
2,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383/rootfs","created":"2022-10-31T17:00:11.597466189Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-165950_745aa6453df7e4d7a2bedb8ef855e2b8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2
c9ac6a5383","pid":3588,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383/rootfs","created":"2022-10-31T17:01:17.560848225Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-165950_8a2a3eb7a75eb7f169392f7d77b36d78","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb1
5588717a8a9","pid":1513,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9/rootfs","created":"2022-10-31T17:00:11.599047477Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-165950_f04a99c5aa78b1fe8d30a6291f8f68f1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bd161590496d96dfd772253e8fc04aa2
ace241cd015a3e030edb9980f0002865","pid":1515,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865/rootfs","created":"2022-10-31T17:00:11.599316909Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-165950_8a2a3eb7a75eb7f169392f7d77b36d78","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c1d26ec1a24e08c41b4eed6cd4a281a
528dd2a96323f389584c153ebdccd783f","pid":3487,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f/rootfs","created":"2022-10-31T17:01:17.250859Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-165950_f04a99c5aa78b1fe8d30a6291f8f68f1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c762f46164888
748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173","pid":2229,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173/rootfs","created":"2022-10-31T17:00:31.186453275Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","io.kubernetes.cri.sandbox-name":"kube-proxy-54b5q","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d","pid":1648,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103
334a22b452c16d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d/rootfs","created":"2022-10-31T17:00:11.813286817Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","pid":3530,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3/rootfs","created":"2022-10
-31T17:01:17.36677191Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_5031015c-081e-49e2-8d46-09fd879a755c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","pid":2590,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2/rootfs","created":"2022-10-31T17:0
0:48.752117294Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-8wsrc_8e76d465-ae9a-4121-b7ed-1ef94dd20b7e","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-8wsrc","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7","pid":3398,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7/rootfs","created":"20
22-10-31T17:01:17.15855458Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-165950_f7f285bbceeae66435f07854fddd011c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","pid":2191,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb
9fa200bb8ba27ef4/rootfs","created":"2022-10-31T17:00:31.051134048Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-jljff_e66c31a9-8e36-4914-a086-32ba2b3dc004","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-jljff","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
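The filtering that follows can be reproduced offline: cri.go keeps only containers whose runc `status` matches the requested state (`paused` here, per the `{State:paused ...}` listing options), which is why every `running` entry above is skipped. A sketch of that selection with `jq` (assumed installed) over a trimmed, hypothetical sample of the array; the IDs are shortened stand-ins, not the real ones:

```shell
# Two-entry stand-in for the 25-element `runc list -f json` array above.
json='[{"id":"92b6c2","status":"running"},{"id":"9b4930","status":"paused"}]'
# Keep only containers in the desired state, printing their IDs --
# the same check cri.go logs as: state = "running", want "paused".
printf '%s' "$json" | jq -r '.[] | select(.status == "paused") | .id'
```

Against the log's actual data this filter would print nothing, matching the wall of `skipping ...` lines that follows.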
	I1031 17:01:26.220196  123788 cri.go:124] list returned 25 containers
	I1031 17:01:26.220215  123788 cri.go:127] container: {ID:08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e Status:running}
	I1031 17:01:26.220253  123788 cri.go:129] skipping 08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e - not in ps
	I1031 17:01:26.220265  123788 cri.go:127] container: {ID:08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489 Status:running}
	I1031 17:01:26.220285  123788 cri.go:129] skipping 08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489 - not in ps
	I1031 17:01:26.220298  123788 cri.go:127] container: {ID:0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823 Status:running}
	I1031 17:01:26.220316  123788 cri.go:129] skipping 0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823 - not in ps
	I1031 17:01:26.220327  123788 cri.go:127] container: {ID:0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f Status:running}
	I1031 17:01:26.220336  123788 cri.go:129] skipping 0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f - not in ps
	I1031 17:01:26.220347  123788 cri.go:127] container: {ID:10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322 Status:running}
	I1031 17:01:26.220360  123788 cri.go:129] skipping 10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322 - not in ps
	I1031 17:01:26.220369  123788 cri.go:127] container: {ID:1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95 Status:running}
	I1031 17:01:26.220377  123788 cri.go:129] skipping 1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95 - not in ps
	I1031 17:01:26.220385  123788 cri.go:127] container: {ID:24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4 Status:running}
	I1031 17:01:26.220398  123788 cri.go:129] skipping 24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4 - not in ps
	I1031 17:01:26.220409  123788 cri.go:127] container: {ID:4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53 Status:running}
	I1031 17:01:26.220422  123788 cri.go:129] skipping 4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53 - not in ps
	I1031 17:01:26.220433  123788 cri.go:127] container: {ID:534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2 Status:running}
	I1031 17:01:26.220445  123788 cri.go:129] skipping 534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2 - not in ps
	I1031 17:01:26.220456  123788 cri.go:127] container: {ID:715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49 Status:running}
	I1031 17:01:26.220468  123788 cri.go:129] skipping 715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49 - not in ps
	I1031 17:01:26.220479  123788 cri.go:127] container: {ID:72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f Status:running}
	I1031 17:01:26.220491  123788 cri.go:129] skipping 72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f - not in ps
	I1031 17:01:26.220498  123788 cri.go:127] container: {ID:8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1 Status:running}
	I1031 17:01:26.220510  123788 cri.go:129] skipping 8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1 - not in ps
	I1031 17:01:26.220522  123788 cri.go:127] container: {ID:91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf Status:running}
	I1031 17:01:26.220540  123788 cri.go:129] skipping 91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf - not in ps
	I1031 17:01:26.220551  123788 cri.go:127] container: {ID:92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e Status:running}
	I1031 17:01:26.220564  123788 cri.go:133] skipping {92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e running}: state = "running", want "paused"
	I1031 17:01:26.220578  123788 cri.go:127] container: {ID:a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383 Status:running}
	I1031 17:01:26.220590  123788 cri.go:129] skipping a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383 - not in ps
	I1031 17:01:26.220601  123788 cri.go:127] container: {ID:ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383 Status:running}
	I1031 17:01:26.220614  123788 cri.go:129] skipping ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383 - not in ps
	I1031 17:01:26.220625  123788 cri.go:127] container: {ID:ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9 Status:running}
	I1031 17:01:26.220637  123788 cri.go:129] skipping ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9 - not in ps
	I1031 17:01:26.220648  123788 cri.go:127] container: {ID:bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865 Status:running}
	I1031 17:01:26.220660  123788 cri.go:129] skipping bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865 - not in ps
	I1031 17:01:26.220670  123788 cri.go:127] container: {ID:c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f Status:running}
	I1031 17:01:26.220679  123788 cri.go:129] skipping c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f - not in ps
	I1031 17:01:26.220689  123788 cri.go:127] container: {ID:c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173 Status:running}
	I1031 17:01:26.220702  123788 cri.go:129] skipping c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173 - not in ps
	I1031 17:01:26.220712  123788 cri.go:127] container: {ID:ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d Status:running}
	I1031 17:01:26.220724  123788 cri.go:129] skipping ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d - not in ps
	I1031 17:01:26.220735  123788 cri.go:127] container: {ID:d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3 Status:running}
	I1031 17:01:26.220749  123788 cri.go:129] skipping d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3 - not in ps
	I1031 17:01:26.220764  123788 cri.go:127] container: {ID:ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2 Status:running}
	I1031 17:01:26.220776  123788 cri.go:129] skipping ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2 - not in ps
	I1031 17:01:26.220787  123788 cri.go:127] container: {ID:debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7 Status:running}
	I1031 17:01:26.220800  123788 cri.go:129] skipping debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7 - not in ps
	I1031 17:01:26.220811  123788 cri.go:127] container: {ID:e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4 Status:running}
	I1031 17:01:26.220823  123788 cri.go:129] skipping e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4 - not in ps
	I1031 17:01:26.220874  123788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:01:26.228503  123788 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1031 17:01:26.228526  123788 kubeadm.go:627] restartCluster start
	I1031 17:01:26.228569  123788 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 17:01:26.242514  123788 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:01:26.243313  123788 kubeconfig.go:92] found "test-preload-165950" server: "https://192.168.67.2:8443"
	I1031 17:01:26.244383  123788 kapi.go:59] client config for test-preload-165950: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/client.key", CAFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1782ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:01:26.245028  123788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 17:01:26.254439  123788 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-10-31 17:00:07.362490176 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-10-31 17:01:25.658180104 +0000
	@@ -38,7 +38,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.24.4
	+kubernetesVersion: v1.24.6
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1031 17:01:26.254466  123788 kubeadm.go:1114] stopping kube-system containers ...
	I1031 17:01:26.254477  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1031 17:01:26.254530  123788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 17:01:26.279826  123788 cri.go:87] found id: "9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c"
	I1031 17:01:26.279858  123788 cri.go:87] found id: "9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf"
	I1031 17:01:26.279865  123788 cri.go:87] found id: "92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e"
	I1031 17:01:26.279880  123788 cri.go:87] found id: ""
	I1031 17:01:26.279886  123788 cri.go:232] Stopping containers: [9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c 9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf 92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e]
	I1031 17:01:26.279928  123788 ssh_runner.go:195] Run: which crictl
	I1031 17:01:26.283140  123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c 9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf 92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e
	I1031 17:01:26.348311  123788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 17:01:26.415283  123788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:01:26.422710  123788 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct 31 17:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 31 17:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Oct 31 17:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Oct 31 17:00 /etc/kubernetes/scheduler.conf
	
	I1031 17:01:26.422771  123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1031 17:01:26.429820  123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1031 17:01:26.436664  123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1031 17:01:26.443399  123788 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:01:26.443466  123788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1031 17:01:26.450583  123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1031 17:01:26.457143  123788 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:01:26.457191  123788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1031 17:01:26.463634  123788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:01:26.471032  123788 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 17:01:26.471057  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:01:26.714848  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:01:27.216857  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:01:27.525201  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:01:27.574451  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:01:27.654849  123788 api_server.go:51] waiting for apiserver process to appear ...
	I1031 17:01:27.654955  123788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:01:27.673572  123788 api_server.go:71] duration metric: took 18.72073ms to wait for apiserver process to appear ...
	I1031 17:01:27.673610  123788 api_server.go:87] waiting for apiserver healthz status ...
	I1031 17:01:27.673630  123788 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1031 17:01:27.678700  123788 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1031 17:01:27.685812  123788 api_server.go:140] control plane version: v1.24.4
	W1031 17:01:27.685841  123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1031 17:01:28.187382  123788 api_server.go:140] control plane version: v1.24.4
	W1031 17:01:28.187416  123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1031 17:01:28.687372  123788 api_server.go:140] control plane version: v1.24.4
	W1031 17:01:28.687411  123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1031 17:01:29.187825  123788 api_server.go:140] control plane version: v1.24.4
	W1031 17:01:29.187861  123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1031 17:01:29.687064  123788 api_server.go:140] control plane version: v1.24.4
	W1031 17:01:29.687093  123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	W1031 17:01:30.187422  123788 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1031 17:01:30.686425  123788 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1031 17:01:31.186366  123788 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	I1031 17:01:35.664099  123788 api_server.go:140] control plane version: v1.24.6
	I1031 17:01:35.664206  123788 api_server.go:130] duration metric: took 7.990587678s to wait for apiserver health ...
	I1031 17:01:35.664232  123788 cni.go:95] Creating CNI manager for ""
	I1031 17:01:35.664274  123788 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 17:01:35.666396  123788 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1031 17:01:35.668255  123788 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1031 17:01:35.857942  123788 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
	I1031 17:01:35.857986  123788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1031 17:01:35.965517  123788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1031 17:01:37.314933  123788 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.349353719s)
	I1031 17:01:37.314969  123788 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 17:01:37.323258  123788 system_pods.go:59] 8 kube-system pods found
	I1031 17:01:37.323308  123788 system_pods.go:61] "coredns-6d4b75cb6d-8wsrc" [8e76d465-ae9a-4121-b7ed-1ef94dd20b7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 17:01:37.323319  123788 system_pods.go:61] "etcd-test-preload-165950" [1738672d-0339-423c-9013-d39e8cbb16c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 17:01:37.323333  123788 system_pods.go:61] "kindnet-jljff" [e66c31a9-8e36-4914-a086-32ba2b3dc004] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1031 17:01:37.323348  123788 system_pods.go:61] "kube-apiserver-test-preload-165950" [a505e0cf-4d56-47bf-865b-6052277ce195] Pending
	I1031 17:01:37.323358  123788 system_pods.go:61] "kube-controller-manager-test-preload-165950" [ebf46104-24d9-427e-b5af-643a80e0aceb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 17:01:37.323374  123788 system_pods.go:61] "kube-proxy-54b5q" [0ff95637-a367-440b-918f-495391f2f1cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 17:01:37.323384  123788 system_pods.go:61] "kube-scheduler-test-preload-165950" [5a7cd673-4c3a-4123-9be5-5f44a196a478] Pending
	I1031 17:01:37.323397  123788 system_pods.go:61] "storage-provisioner" [5031015c-081e-49e2-8d46-09fd879a755c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 17:01:37.323409  123788 system_pods.go:74] duration metric: took 8.433081ms to wait for pod list to return data ...
	I1031 17:01:37.323422  123788 node_conditions.go:102] verifying NodePressure condition ...
	I1031 17:01:37.326311  123788 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1031 17:01:37.326342  123788 node_conditions.go:123] node cpu capacity is 8
	I1031 17:01:37.326356  123788 node_conditions.go:105] duration metric: took 2.929267ms to run NodePressure ...
	I1031 17:01:37.326375  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:01:37.573644  123788 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1031 17:01:37.578158  123788 kubeadm.go:778] kubelet initialised
	I1031 17:01:37.578189  123788 kubeadm.go:779] duration metric: took 4.510409ms waiting for restarted kubelet to initialise ...
	I1031 17:01:37.578198  123788 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:01:37.583642  123788 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace to be "Ready" ...
	I1031 17:01:39.594948  123788 pod_ready.go:102] pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace has status "Ready":"False"
	I1031 17:01:42.094075  123788 pod_ready.go:102] pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace has status "Ready":"False"
	I1031 17:01:43.095366  123788 pod_ready.go:92] pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace has status "Ready":"True"
	I1031 17:01:43.095404  123788 pod_ready.go:81] duration metric: took 5.511730023s waiting for pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace to be "Ready" ...
	I1031 17:01:43.095417  123788 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-165950" in "kube-system" namespace to be "Ready" ...
	I1031 17:01:45.107196  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:41.606128  123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
	I1031 17:05:43.100897  123788 pod_ready.go:81] duration metric: took 4m0.005465717s waiting for pod "etcd-test-preload-165950" in "kube-system" namespace to be "Ready" ...
	E1031 17:05:43.100926  123788 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-165950" in "kube-system" namespace to be "Ready" (will not retry!)
	I1031 17:05:43.100947  123788 pod_ready.go:38] duration metric: took 4m5.522739337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:05:43.100986  123788 kubeadm.go:631] restartCluster took 4m16.872448037s
	W1031 17:05:43.101155  123788 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 17:05:43.101190  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1031 17:05:44.844963  123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.743753735s)
	I1031 17:05:44.845025  123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:05:44.855523  123788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:05:44.862648  123788 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1031 17:05:44.862707  123788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:05:44.870144  123788 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:05:44.870199  123788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1031 17:05:44.907996  123788 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I1031 17:05:44.908047  123788 kubeadm.go:317] [preflight] Running pre-flight checks
	I1031 17:05:44.935802  123788 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1031 17:05:44.935928  123788 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1031 17:05:44.935973  123788 kubeadm.go:317] OS: Linux
	I1031 17:05:44.936020  123788 kubeadm.go:317] CGROUPS_CPU: enabled
	I1031 17:05:44.936060  123788 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1031 17:05:44.936139  123788 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1031 17:05:44.936189  123788 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1031 17:05:44.936256  123788 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1031 17:05:44.936353  123788 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1031 17:05:44.936421  123788 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1031 17:05:44.936478  123788 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1031 17:05:44.936542  123788 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1031 17:05:45.016629  123788 kubeadm.go:317] W1031 17:05:44.903005    6621 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1031 17:05:45.016840  123788 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1031 17:05:45.016930  123788 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:05:45.016992  123788 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I1031 17:05:45.017027  123788 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I1031 17:05:45.017070  123788 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I1031 17:05:45.017152  123788 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1031 17:05:45.017213  123788 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1031 17:05:45.017401  123788 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1031 17:05:44.903005    6621 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	I1031 17:05:45.017440  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1031 17:05:45.355913  123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:05:45.365437  123788 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1031 17:05:45.365484  123788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:05:45.372598  123788 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:05:45.372638  123788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1031 17:05:45.410978  123788 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I1031 17:05:45.411059  123788 kubeadm.go:317] [preflight] Running pre-flight checks
	I1031 17:05:45.437866  123788 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1031 17:05:45.437950  123788 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1031 17:05:45.438007  123788 kubeadm.go:317] OS: Linux
	I1031 17:05:45.438080  123788 kubeadm.go:317] CGROUPS_CPU: enabled
	I1031 17:05:45.438188  123788 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1031 17:05:45.438265  123788 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1031 17:05:45.438327  123788 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1031 17:05:45.438408  123788 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1031 17:05:45.438474  123788 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1031 17:05:45.438542  123788 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1031 17:05:45.438609  123788 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1031 17:05:45.438681  123788 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1031 17:05:45.506713  123788 kubeadm.go:317] W1031 17:05:45.405956    6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1031 17:05:45.506996  123788 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1031 17:05:45.507114  123788 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:05:45.507178  123788 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I1031 17:05:45.507221  123788 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I1031 17:05:45.507264  123788 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I1031 17:05:45.507371  123788 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1031 17:05:45.507485  123788 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1031 17:05:45.507500  123788 kubeadm.go:398] StartCluster complete in 4m19.348589229s
	I1031 17:05:45.507531  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:05:45.507575  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:05:45.530536  123788 cri.go:87] found id: ""
	I1031 17:05:45.530565  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.530573  123788 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:05:45.530579  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:05:45.530626  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:05:45.554752  123788 cri.go:87] found id: ""
	I1031 17:05:45.554777  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.554783  123788 logs.go:276] No container was found matching "etcd"
	I1031 17:05:45.554789  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:05:45.554831  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:05:45.578518  123788 cri.go:87] found id: ""
	I1031 17:05:45.578542  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.578548  123788 logs.go:276] No container was found matching "coredns"
	I1031 17:05:45.578554  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:05:45.578603  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:05:45.602333  123788 cri.go:87] found id: ""
	I1031 17:05:45.602356  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.602363  123788 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:05:45.602368  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:05:45.602408  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:05:45.625824  123788 cri.go:87] found id: ""
	I1031 17:05:45.625847  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.625853  123788 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:05:45.625859  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:05:45.625920  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:05:45.649488  123788 cri.go:87] found id: ""
	I1031 17:05:45.649513  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.649519  123788 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:05:45.649526  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:05:45.649574  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:05:45.672881  123788 cri.go:87] found id: ""
	I1031 17:05:45.672907  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.672914  123788 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:05:45.672920  123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:05:45.672965  123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:05:45.695705  123788 cri.go:87] found id: ""
	I1031 17:05:45.695729  123788 logs.go:274] 0 containers: []
	W1031 17:05:45.695736  123788 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:05:45.695744  123788 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:05:45.695756  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:05:45.827779  123788 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:05:45.827803  123788 logs.go:123] Gathering logs for containerd ...
	I1031 17:05:45.827814  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:05:45.882431  123788 logs.go:123] Gathering logs for container status ...
	I1031 17:05:45.882482  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:05:45.908973  123788 logs.go:123] Gathering logs for kubelet ...
	I1031 17:05:45.909003  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:05:45.967611  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661461    4266 projected.go:192] Error preparing data for projected volume kube-api-access-8mn6l for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.968060  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661580    4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l podName:5031015c-081e-49e2-8d46-09fd879a755c nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661550988 +0000 UTC m=+9.136031253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mn6l" (UniqueName: "kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l") pod "storage-provisioner" (UID: "5031015c-081e-49e2-8d46-09fd879a755c") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.968229  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.661699    4266 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.968390  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661728    4266 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.968572  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661819    4266 projected.go:192] Error preparing data for projected volume kube-api-access-d8dpf for pod kube-system/coredns-6d4b75cb6d-8wsrc: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.968978  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661876    4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e76d465-ae9a-4121-b7ed-1ef94dd20b7e-kube-api-access-d8dpf podName:8e76d465-ae9a-4121-b7ed-1ef94dd20b7e nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661860993 +0000 UTC m=+9.136341257 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d8dpf" (UniqueName: "kubernetes.io/projected/8e76d465-ae9a-4121-b7ed-1ef94dd20b7e-kube-api-access-d8dpf") pod "coredns-6d4b75cb6d-8wsrc" (UID: "8e76d465-ae9a-4121-b7ed-1ef94dd20b7e") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.969129  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.662000    4266 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.969296  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662020    4266 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.969441  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.662225    4266 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.969602  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662242    4266 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.969778  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662330    4266 projected.go:192] Error preparing data for projected volume kube-api-access-5m45q for pod kube-system/kindnet-jljff: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.970177  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662376    4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e66c31a9-8e36-4914-a086-32ba2b3dc004-kube-api-access-5m45q podName:e66c31a9-8e36-4914-a086-32ba2b3dc004 nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.662359447 +0000 UTC m=+9.136839704 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5m45q" (UniqueName: "kubernetes.io/projected/e66c31a9-8e36-4914-a086-32ba2b3dc004-kube-api-access-5m45q") pod "kindnet-jljff" (UID: "e66c31a9-8e36-4914-a086-32ba2b3dc004") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.970359  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662434    4266 projected.go:192] Error preparing data for projected volume kube-api-access-r84wv for pod kube-system/kube-proxy-54b5q: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	W1031 17:05:45.970760  123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662472    4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff95637-a367-440b-918f-495391f2f1cf-kube-api-access-r84wv podName:0ff95637-a367-440b-918f-495391f2f1cf nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.662457708 +0000 UTC m=+9.136937970 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r84wv" (UniqueName: "kubernetes.io/projected/0ff95637-a367-440b-918f-495391f2f1cf-kube-api-access-r84wv") pod "kube-proxy-54b5q" (UID: "0ff95637-a367-440b-918f-495391f2f1cf") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	I1031 17:05:45.991682  123788 logs.go:123] Gathering logs for dmesg ...
	I1031 17:05:45.991709  123788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1031 17:05:46.006370  123788 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1031 17:05:45.405956    6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W1031 17:05:46.006406  123788 out.go:239] * 
	W1031 17:05:46.006520  123788 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1031 17:05:45.405956    6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W1031 17:05:46.006538  123788 out.go:239] * 
	W1031 17:05:46.007299  123788 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 17:05:46.010794  123788 out.go:177] X Problems detected in kubelet:
	I1031 17:05:46.012324  123788 out.go:177]   Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661461    4266 projected.go:192] Error preparing data for projected volume kube-api-access-8mn6l for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	I1031 17:05:46.013853  123788 out.go:177]   Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661580    4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l podName:5031015c-081e-49e2-8d46-09fd879a755c nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661550988 +0000 UTC m=+9.136031253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mn6l" (UniqueName: "kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l") pod "storage-provisioner" (UID: "5031015c-081e-49e2-8d46-09fd879a755c") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	I1031 17:05:46.015648  123788 out.go:177]   Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.661699    4266 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
	I1031 17:05:46.017937  123788 out.go:177] 
	W1031 17:05:46.019427  123788 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1031 17:05:45.405956    6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W1031 17:05:46.019527  123788 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1031 17:05:46.019585  123788 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1031 17:05:46.021064  123788 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2022-10-31 16:59:52 UTC, end at Mon 2022-10-31 17:05:47 UTC. --
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.151462628Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.167398259Z" level=info msg="StopPodSandbox for \"this\""
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.167445136Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.184330546Z" level=info msg="StopPodSandbox for \"endpoint\""
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.184383668Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.200382110Z" level=info msg="StopPodSandbox for \"is\""
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.200443041Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.216361944Z" level=info msg="StopPodSandbox for \"deprecated,\""
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.216425258Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.234264674Z" level=info msg="StopPodSandbox for \"please\""
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.234319247Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.250917604Z" level=info msg="StopPodSandbox for \"consider\""
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.250966395Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.267354061Z" level=info msg="StopPodSandbox for \"using\""
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.267406337Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.284043412Z" level=info msg="StopPodSandbox for \"full\""
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.284110906Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.300567351Z" level=info msg="StopPodSandbox for \"URL\""
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.300622686Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.316986446Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.317046155Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.333896652Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.333945909Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.351394870Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.351451080Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.008726] FS-Cache: N-key=[8] '81a00f0200000000'
	[Oct31 16:44] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct31 16:55] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
	[  +0.000008] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
	[  +1.003479] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
	[  +0.000006] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
	[  +2.015780] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
	[  +0.000006] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
	[  +4.127615] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
	[  +0.000034] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
	[  +8.191156] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
	[  +0.000047] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
	[Oct31 16:58] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
	[  +0.000006] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
	[  +1.026086] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
	[  +0.000006] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
	[  +2.015755] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
	[  +0.000005] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
	[  +4.163565] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
	[  +0.000006] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
	[  +8.187227] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
	[  +0.000006] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
	[Oct31 17:01] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000732] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.012252] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> kernel <==
	*  17:05:47 up 48 min,  0 users,  load average: 0.31, 0.50, 0.66
	Linux test-preload-165950 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-10-31 16:59:52 UTC, end at Mon 2022-10-31 17:05:47 UTC. --
	Oct 31 17:04:11 test-preload-165950 kubelet[4266]: E1031 17:04:11.871643    4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
	Oct 31 17:04:24 test-preload-165950 kubelet[4266]: I1031 17:04:24.871748    4266 scope.go:110] "RemoveContainer" containerID="b3690aac287e29d3bf725c8f480fcc9f2dc84bd79eb1fca05505086a658aa453"
	Oct 31 17:04:24 test-preload-165950 kubelet[4266]: E1031 17:04:24.872128    4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
	Oct 31 17:04:36 test-preload-165950 kubelet[4266]: I1031 17:04:36.870972    4266 scope.go:110] "RemoveContainer" containerID="b3690aac287e29d3bf725c8f480fcc9f2dc84bd79eb1fca05505086a658aa453"
	Oct 31 17:04:37 test-preload-165950 kubelet[4266]: I1031 17:04:37.350432    4266 scope.go:110] "RemoveContainer" containerID="b3690aac287e29d3bf725c8f480fcc9f2dc84bd79eb1fca05505086a658aa453"
	Oct 31 17:04:37 test-preload-165950 kubelet[4266]: I1031 17:04:37.350765    4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
	Oct 31 17:04:37 test-preload-165950 kubelet[4266]: E1031 17:04:37.351229    4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
	Oct 31 17:04:42 test-preload-165950 kubelet[4266]: I1031 17:04:42.948654    4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
	Oct 31 17:04:42 test-preload-165950 kubelet[4266]: E1031 17:04:42.949066    4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
	Oct 31 17:04:46 test-preload-165950 kubelet[4266]: I1031 17:04:46.648249    4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
	Oct 31 17:04:46 test-preload-165950 kubelet[4266]: E1031 17:04:46.648648    4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
	Oct 31 17:04:47 test-preload-165950 kubelet[4266]: I1031 17:04:47.371699    4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
	Oct 31 17:04:47 test-preload-165950 kubelet[4266]: E1031 17:04:47.372027    4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
	Oct 31 17:05:00 test-preload-165950 kubelet[4266]: I1031 17:05:00.871106    4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
	Oct 31 17:05:00 test-preload-165950 kubelet[4266]: E1031 17:05:00.871519    4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
	Oct 31 17:05:11 test-preload-165950 kubelet[4266]: I1031 17:05:11.871811    4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
	Oct 31 17:05:11 test-preload-165950 kubelet[4266]: E1031 17:05:11.872226    4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
	Oct 31 17:05:26 test-preload-165950 kubelet[4266]: I1031 17:05:26.871149    4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
	Oct 31 17:05:26 test-preload-165950 kubelet[4266]: E1031 17:05:26.871524    4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
	Oct 31 17:05:37 test-preload-165950 kubelet[4266]: I1031 17:05:37.871828    4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
	Oct 31 17:05:37 test-preload-165950 kubelet[4266]: E1031 17:05:37.872202    4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
	Oct 31 17:05:43 test-preload-165950 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Oct 31 17:05:43 test-preload-165950 kubelet[4266]: I1031 17:05:43.206951    4266 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 31 17:05:43 test-preload-165950 systemd[1]: kubelet.service: Succeeded.
	Oct 31 17:05:43 test-preload-165950 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

-- /stdout --
** stderr ** 
	E1031 17:05:47.096749  128573 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-165950 -n test-preload-165950
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-165950 -n test-preload-165950: exit status 2 (351.96652ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "test-preload-165950" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-165950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-165950
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-165950: (2.115498231s)
--- FAIL: TestPreload (359.48s)
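The decisive errors in this failure are the kubeadm preflight checks `[ERROR Port-2379]` and `[ERROR Port-2380]`: the etcd client and peer ports were still bound (most likely by the etcd container left over from the first `minikube start`) when the v1.24.6 restart re-ran `kubeadm init`. A minimal sketch for confirming such a port conflict on the host, assuming `ss` from iproute2 is available (as on the Ubuntu CI agent); the port numbers come from the log above, everything else is illustrative:

```shell
#!/bin/sh
# Report whether the ports kubeadm complained about are still held by a
# listening TCP socket. "sport = :<port>" is ss's socket-filter syntax.
for port in 2379 2380; do
  if ss -ltn "sport = :${port}" | grep -q ":${port}"; then
    echo "port ${port} is in use"
  else
    echo "port ${port} is free"
  fi
done
```

Running this with `sudo ss -ltnp` instead would additionally print the PID holding the port, which is what the "Run lsof" suggestion in the log is driving at.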

TestKubernetesUpgrade (579.54s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171032 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-171032 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (50.520549591s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-171032
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-171032: (1.339394355s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-171032 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-171032 status --format={{.Host}}: exit status 7 (153.670122ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171032 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-171032 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (8m43.35331266s)

-- stdout --
	* [kubernetes-upgrade-171032] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-171032 in cluster kubernetes-upgrade-171032
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-171032" ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Oct 31 17:19:17 kubernetes-upgrade-171032 kubelet[12551]: E1031 17:19:17.859999   12551 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:19:18 kubernetes-upgrade-171032 kubelet[12563]: E1031 17:19:18.615680   12563 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:19:19 kubernetes-upgrade-171032 kubelet[12573]: E1031 17:19:19.358228   12573 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	
	

-- /stdout --
** stderr ** 
	I1031 17:11:24.834777  190637 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:11:24.834947  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:11:24.834957  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:11:24.834964  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:11:24.835077  190637 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
	I1031 17:11:24.835710  190637 out.go:303] Setting JSON to false
	I1031 17:11:24.837666  190637 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3235,"bootTime":1667233050,"procs":1168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:11:24.837747  190637 start.go:126] virtualization: kvm guest
	I1031 17:11:24.841123  190637 out.go:177] * [kubernetes-upgrade-171032] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:11:24.842931  190637 out.go:177]   - MINIKUBE_LOCATION=15232
	I1031 17:11:24.842827  190637 notify.go:220] Checking for updates...
	I1031 17:11:24.845001  190637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:11:24.846875  190637 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 17:11:24.860343  190637 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	I1031 17:11:24.862606  190637 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:11:24.864680  190637 config.go:180] Loaded profile config "kubernetes-upgrade-171032": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1031 17:11:24.865302  190637 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 17:11:24.914805  190637 docker.go:137] docker version: linux-20.10.21
	I1031 17:11:24.914940  190637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 17:11:25.078660  190637 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:57 SystemTime:2022-10-31 17:11:24.939191305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 17:11:25.078796  190637 docker.go:254] overlay module found
	I1031 17:11:25.080996  190637 out.go:177] * Using the docker driver based on existing profile
	I1031 17:11:25.082810  190637 start.go:282] selected driver: docker
	I1031 17:11:25.082837  190637 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-171032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-171032 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:11:25.083017  190637 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:11:25.084398  190637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 17:11:25.240995  190637 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:57 SystemTime:2022-10-31 17:11:25.124477879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 17:11:25.241402  190637 cni.go:95] Creating CNI manager for ""
	I1031 17:11:25.241427  190637 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 17:11:25.241467  190637 start_flags.go:317] config:
	{Name:kubernetes-upgrade-171032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-171032 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:11:25.244343  190637 out.go:177] * Starting control plane node kubernetes-upgrade-171032 in cluster kubernetes-upgrade-171032
	I1031 17:11:25.245904  190637 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1031 17:11:25.247591  190637 out.go:177] * Pulling base image ...
	I1031 17:11:25.249336  190637 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1031 17:11:25.249400  190637 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1031 17:11:25.249422  190637 cache.go:57] Caching tarball of preloaded images
	I1031 17:11:25.249480  190637 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1031 17:11:25.249736  190637 preload.go:174] Found /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:11:25.249750  190637 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I1031 17:11:25.249898  190637 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kubernetes-upgrade-171032/config.json ...
	I1031 17:11:25.291760  190637 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1031 17:11:25.291792  190637 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1031 17:11:25.291814  190637 cache.go:208] Successfully downloaded all kic artifacts
	I1031 17:11:25.291853  190637 start.go:364] acquiring machines lock for kubernetes-upgrade-171032: {Name:mk2d2465b4e10e89365064db684c006d80c47d98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:11:25.291985  190637 start.go:368] acquired machines lock for "kubernetes-upgrade-171032" in 92.041µs
	I1031 17:11:25.292015  190637 start.go:96] Skipping create...Using existing machine configuration
	I1031 17:11:25.292027  190637 fix.go:55] fixHost starting: 
	I1031 17:11:25.292379  190637 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-171032 --format={{.State.Status}}
	I1031 17:11:25.327118  190637 fix.go:103] recreateIfNeeded on kubernetes-upgrade-171032: state=Stopped err=<nil>
	W1031 17:11:25.327156  190637 fix.go:129] unexpected machine state, will restart: <nil>
	I1031 17:11:25.330647  190637 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-171032" ...
	I1031 17:11:25.332299  190637 cli_runner.go:164] Run: docker start kubernetes-upgrade-171032
	I1031 17:11:25.832054  190637 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-171032 --format={{.State.Status}}
	I1031 17:11:25.874332  190637 kic.go:415] container "kubernetes-upgrade-171032" state is running.
	I1031 17:11:25.874813  190637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-171032
	I1031 17:11:25.906771  190637 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kubernetes-upgrade-171032/config.json ...
	I1031 17:11:25.906981  190637 machine.go:88] provisioning docker machine ...
	I1031 17:11:25.907008  190637 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-171032"
	I1031 17:11:25.907063  190637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171032
	I1031 17:11:25.936198  190637 main.go:134] libmachine: Using SSH client type: native
	I1031 17:11:25.936425  190637 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49372 <nil> <nil>}
	I1031 17:11:25.936457  190637 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-171032 && echo "kubernetes-upgrade-171032" | sudo tee /etc/hostname
	I1031 17:11:25.937096  190637 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45442->127.0.0.1:49372: read: connection reset by peer
	I1031 17:11:29.072520  190637 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-171032
	
	I1031 17:11:29.072605  190637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171032
	I1031 17:11:29.108034  190637 main.go:134] libmachine: Using SSH client type: native
	I1031 17:11:29.108313  190637 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49372 <nil> <nil>}
	I1031 17:11:29.108347  190637 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-171032' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-171032/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-171032' | sudo tee -a /etc/hosts; 
				fi
			fi
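The SSH command above makes the container's own hostname resolve locally: if no line in `/etc/hosts` ends with the hostname, it either rewrites the `127.0.1.1` alias line or appends one. A minimal standalone sketch of that logic, operating on an arbitrary hosts file instead of `/etc/hosts` (the helper name `ensure_host_entry` is hypothetical, not minikube's API):

```shell
# Mirror of the logged grep/sed/append provisioning step, against a file argument.
ensure_host_entry() {
  hosts_file="$1"; name="$2"
  if ! grep -q "[[:space:]]$name\$" "$hosts_file"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts_file"; then
      # Rewrite the existing loopback alias line to point at the new hostname.
      sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts_file"
    else
      # No alias line yet: append one.
      echo "127.0.1.1 $name" >> "$hosts_file"
    fi
  fi
}
```

Running it twice is a no-op, which is why the provisioner can safely re-run it on an existing machine.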
	I1031 17:11:29.228657  190637 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:11:29.228689  190637 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-3650/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-3650/.minikube}
	I1031 17:11:29.228721  190637 ubuntu.go:177] setting up certificates
	I1031 17:11:29.228731  190637 provision.go:83] configureAuth start
	I1031 17:11:29.228777  190637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-171032
	I1031 17:11:29.261214  190637 provision.go:138] copyHostCerts
	I1031 17:11:29.261275  190637 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem, removing ...
	I1031 17:11:29.261293  190637 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem
	I1031 17:11:29.261375  190637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem (1078 bytes)
	I1031 17:11:29.261484  190637 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem, removing ...
	I1031 17:11:29.261500  190637 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem
	I1031 17:11:29.261541  190637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem (1123 bytes)
	I1031 17:11:29.261620  190637 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem, removing ...
	I1031 17:11:29.261632  190637 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem
	I1031 17:11:29.261668  190637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem (1679 bytes)
	I1031 17:11:29.261752  190637 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-171032 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-171032]
	I1031 17:11:29.380311  190637 provision.go:172] copyRemoteCerts
	I1031 17:11:29.380395  190637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:11:29.380433  190637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171032
	I1031 17:11:29.410731  190637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49372 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/kubernetes-upgrade-171032/id_rsa Username:docker}
	I1031 17:11:29.501040  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 17:11:29.521684  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 17:11:29.541754  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1031 17:11:29.561995  190637 provision.go:86] duration metric: configureAuth took 333.250644ms
	I1031 17:11:29.562031  190637 ubuntu.go:193] setting minikube options for container-runtime
	I1031 17:11:29.562225  190637 config.go:180] Loaded profile config "kubernetes-upgrade-171032": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 17:11:29.562245  190637 machine.go:91] provisioned docker machine in 3.655245969s
	I1031 17:11:29.562255  190637 start.go:300] post-start starting for "kubernetes-upgrade-171032" (driver="docker")
	I1031 17:11:29.562271  190637 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:11:29.562321  190637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:11:29.562364  190637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171032
	I1031 17:11:29.594481  190637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49372 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/kubernetes-upgrade-171032/id_rsa Username:docker}
	I1031 17:11:29.685682  190637 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:11:29.688756  190637 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1031 17:11:29.688787  190637 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1031 17:11:29.688798  190637 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1031 17:11:29.688804  190637 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1031 17:11:29.688816  190637 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/addons for local assets ...
	I1031 17:11:29.688870  190637 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/files for local assets ...
	I1031 17:11:29.688970  190637 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem -> 100972.pem in /etc/ssl/certs
	I1031 17:11:29.689079  190637 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:11:29.697856  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /etc/ssl/certs/100972.pem (1708 bytes)
	I1031 17:11:29.719418  190637 start.go:303] post-start completed in 157.14027ms
	I1031 17:11:29.719513  190637 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 17:11:29.719586  190637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171032
	I1031 17:11:29.745204  190637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49372 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/kubernetes-upgrade-171032/id_rsa Username:docker}
	I1031 17:11:29.833143  190637 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1031 17:11:29.837271  190637 fix.go:57] fixHost completed within 4.545238945s
	I1031 17:11:29.837308  190637 start.go:83] releasing machines lock for "kubernetes-upgrade-171032", held for 4.545303069s
	I1031 17:11:29.837399  190637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-171032
	I1031 17:11:29.862618  190637 ssh_runner.go:195] Run: systemctl --version
	I1031 17:11:29.862655  190637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:11:29.862678  190637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171032
	I1031 17:11:29.862718  190637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-171032
	I1031 17:11:29.894672  190637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49372 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/kubernetes-upgrade-171032/id_rsa Username:docker}
	I1031 17:11:29.895337  190637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49372 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/kubernetes-upgrade-171032/id_rsa Username:docker}
	I1031 17:11:29.980619  190637 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:11:30.014933  190637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:11:30.026799  190637 docker.go:189] disabling docker service ...
	I1031 17:11:30.026867  190637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 17:11:30.037360  190637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 17:11:30.046595  190637 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 17:11:30.133208  190637 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 17:11:30.225083  190637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 17:11:30.235492  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:11:30.250161  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I1031 17:11:30.259676  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1031 17:11:30.269063  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1031 17:11:30.278898  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
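The four sed commands above each rewrite one key in `/etc/containerd/config.toml` (sandbox image, OOM score restriction, cgroup driver, CNI conf dir). A sketch of the same rewrites against a scratch copy of the file, so it can run anywhere; the replacement values mirror the log, while the starting values are illustrative:

```shell
# Patch containerd-style TOML keys line-by-line with sed, as in the log.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
sandbox_image = "registry.k8s.io/pause:3.6"
restrict_oom_score_adj = true
SystemdCgroup = true
conf_dir = "/etc/cni/net.d"
EOF
sed -i 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' "$cfg"
sed -i 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' "$cfg"
sed -i 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' "$cfg"
sed -i 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' "$cfg"
```

Because each pattern matches the whole line (`^.*key = .*$`), the edits are idempotent: re-running them leaves the already-patched file unchanged.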
	I1031 17:11:30.288378  190637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:11:30.296238  190637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:11:30.304136  190637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:11:30.396985  190637 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:11:30.476666  190637 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1031 17:11:30.476752  190637 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1031 17:11:30.481461  190637 start.go:472] Will wait 60s for crictl version
	I1031 17:11:30.481525  190637 ssh_runner.go:195] Run: sudo crictl version
	I1031 17:11:30.512760  190637 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-10-31T17:11:30Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1031 17:11:41.560241  190637 ssh_runner.go:195] Run: sudo crictl version
	I1031 17:11:41.587491  190637 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
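The `retry.go:31` line above shows minikube polling `sudo crictl version` with a delay until the freshly restarted containerd stops answering "server is not initialized yet". A minimal sketch of that retry-until-ready pattern (the `retry_until` helper is illustrative, not minikube's actual API):

```shell
# Re-run a command up to N times, sleeping between attempts, until it succeeds.
retry_until() {
  attempts="$1"; delay="$2"; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0          # command succeeded: runtime is ready
    i=$((i+1))
    sleep "$delay"            # back off before the next probe, as in the log
  done
  return 1                    # still failing after all attempts
}
```

In the log the probed command is `sudo crictl version`; here any command works, e.g. `retry_until 5 1 curl -sf http://localhost:8443/healthz`.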
	I1031 17:11:41.587559  190637 ssh_runner.go:195] Run: containerd --version
	I1031 17:11:41.615997  190637 ssh_runner.go:195] Run: containerd --version
	I1031 17:11:41.641399  190637 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	I1031 17:11:41.643015  190637 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-171032 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1031 17:11:41.671142  190637 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1031 17:11:41.674834  190637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:11:41.686696  190637 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1031 17:11:41.688328  190637 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1031 17:11:41.688406  190637 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 17:11:41.716986  190637 containerd.go:549] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.25.3". assuming images are not preloaded.
	I1031 17:11:41.717057  190637 ssh_runner.go:195] Run: which lz4
	I1031 17:11:41.720234  190637 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 17:11:41.723251  190637 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I1031 17:11:41.723277  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (669534256 bytes)
	I1031 17:11:43.129879  190637 containerd.go:496] Took 1.409671 seconds to copy over tarball
	I1031 17:11:43.129957  190637 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 17:11:45.836561  190637 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.706573755s)
	I1031 17:11:45.836598  190637 containerd.go:503] Took 2.706682 seconds to extract the tarball
	I1031 17:11:45.836609  190637 ssh_runner.go:146] rm: /preloaded.tar.lz4
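The preload flow above is: scp the cached image tarball to the node, extract it under `/var`, then delete the tarball. A sketch of that copy/extract/cleanup sequence in a scratch directory; the real log uses lz4 compression (`tar -I lz4 ... /preloaded.tar.lz4`), while plain tar is used here so the sketch runs without the lz4 binary installed:

```shell
# Build a stand-in "preload" archive, extract it at the target root, clean up.
work=$(mktemp -d)
mkdir -p "$work/src/lib/minikube/images"
echo "layer-data" > "$work/src/lib/minikube/images/layer.tar"
tar -C "$work/src" -cf "$work/preloaded.tar" lib
mkdir -p "$work/var"
tar -C "$work/var" -xf "$work/preloaded.tar"   # log: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
rm "$work/preloaded.tar"                       # log: rm /preloaded.tar.lz4
```

Extracting with `-C /var` is what lets the archive's `lib/...` paths land directly under `/var/lib/...`, where containerd's image store expects them.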
	I1031 17:11:45.945924  190637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:11:46.039714  190637 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:11:46.121611  190637 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 17:11:46.152202  190637 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.25.3 registry.k8s.io/kube-controller-manager:v1.25.3 registry.k8s.io/kube-scheduler:v1.25.3 registry.k8s.io/kube-proxy:v1.25.3 registry.k8s.io/pause:3.8 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 17:11:46.152309  190637 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:11:46.152340  190637 image.go:134] retrieving image: registry.k8s.io/pause:3.8
	I1031 17:11:46.152363  190637 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.25.3
	I1031 17:11:46.152502  190637 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.4-0
	I1031 17:11:46.152528  190637 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
	I1031 17:11:46.152357  190637 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.25.3
	I1031 17:11:46.152317  190637 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.25.3
	I1031 17:11:46.152374  190637 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.25.3
	I1031 17:11:46.153415  190637 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.25.3: Error: No such image: registry.k8s.io/kube-apiserver:v1.25.3
	I1031 17:11:46.153640  190637 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.25.3: Error: No such image: registry.k8s.io/kube-controller-manager:v1.25.3
	I1031 17:11:46.153640  190637 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.25.3: Error: No such image: registry.k8s.io/kube-scheduler:v1.25.3
	I1031 17:11:46.153646  190637 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:11:46.153646  190637 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.4-0: Error: No such image: registry.k8s.io/etcd:3.5.4-0
	I1031 17:11:46.153692  190637 image.go:177] daemon lookup for registry.k8s.io/pause:3.8: Error: No such image: registry.k8s.io/pause:3.8
	I1031 17:11:46.153707  190637 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I1031 17:11:46.153857  190637 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.25.3: Error: No such image: registry.k8s.io/kube-proxy:v1.25.3
	I1031 17:11:46.327558  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.25.3"
	I1031 17:11:46.355803  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.8"
	I1031 17:11:46.361452  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.25.3"
	I1031 17:11:46.370443  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.9.3"
	I1031 17:11:46.380629  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.4-0"
	I1031 17:11:46.383567  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.25.3"
	I1031 17:11:46.399602  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.25.3"
	I1031 17:11:46.863840  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1031 17:11:47.063554  190637 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.25.3" needs transfer: "registry.k8s.io/kube-proxy:v1.25.3" does not exist at hash "beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041" in container runtime
	I1031 17:11:47.063607  190637 cri.go:216] Removing image: registry.k8s.io/kube-proxy:v1.25.3
	I1031 17:11:47.063675  190637 ssh_runner.go:195] Run: which crictl
	I1031 17:11:47.247871  190637 cache_images.go:116] "registry.k8s.io/pause:3.8" needs transfer: "registry.k8s.io/pause:3.8" does not exist at hash "4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517" in container runtime
	I1031 17:11:47.247927  190637 cri.go:216] Removing image: registry.k8s.io/pause:3.8
	I1031 17:11:47.247971  190637 ssh_runner.go:195] Run: which crictl
	I1031 17:11:47.254455  190637 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.25.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.25.3" does not exist at hash "0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0" in container runtime
	I1031 17:11:47.254535  190637 cri.go:216] Removing image: registry.k8s.io/kube-apiserver:v1.25.3
	I1031 17:11:47.254588  190637 ssh_runner.go:195] Run: which crictl
	I1031 17:11:47.276910  190637 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.9.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.9.3" does not exist at hash "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
	I1031 17:11:47.276965  190637 cri.go:216] Removing image: registry.k8s.io/coredns/coredns:v1.9.3
	I1031 17:11:47.277006  190637 ssh_runner.go:195] Run: which crictl
	I1031 17:11:47.277048  190637 cache_images.go:116] "registry.k8s.io/etcd:3.5.4-0" needs transfer: "registry.k8s.io/etcd:3.5.4-0" does not exist at hash "a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66" in container runtime
	I1031 17:11:47.277089  190637 cri.go:216] Removing image: registry.k8s.io/etcd:3.5.4-0
	I1031 17:11:47.277125  190637 ssh_runner.go:195] Run: which crictl
	I1031 17:11:47.277126  190637 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.25.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.25.3" does not exist at hash "60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a" in container runtime
	I1031 17:11:47.277147  190637 cri.go:216] Removing image: registry.k8s.io/kube-controller-manager:v1.25.3
	I1031 17:11:47.277170  190637 ssh_runner.go:195] Run: which crictl
	I1031 17:11:47.286403  190637 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.25.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.25.3" does not exist at hash "6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912" in container runtime
	I1031 17:11:47.286453  190637 cri.go:216] Removing image: registry.k8s.io/kube-scheduler:v1.25.3
	I1031 17:11:47.286493  190637 ssh_runner.go:195] Run: which crictl
	I1031 17:11:47.417254  190637 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.25.3
	I1031 17:11:47.417255  190637 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1031 17:11:47.417319  190637 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.8
	I1031 17:11:47.417334  190637 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.25.3
	I1031 17:11:47.417347  190637 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:11:47.417377  190637 ssh_runner.go:195] Run: which crictl
	I1031 17:11:47.417393  190637 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.9.3
	I1031 17:11:47.417458  190637 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.25.3
	I1031 17:11:47.417480  190637 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.4-0
	I1031 17:11:47.417557  190637 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.25.3
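The "needs transfer" decisions above come from per-image presence checks (`sudo ctr -n=k8s.io images check | grep <ref>`): a failed grep marks the image missing in the runtime, it is then removed with `crictl rmi` and reloaded from the local cache. A minimal sketch of that decision logic, with the runtime's image list stubbed as a shell variable (image names taken from the log; the stub itself is hypothetical):

```shell
# Images the runtime reports as present (stubbed; minikube runs
# `sudo ctr -n=k8s.io images check | grep <ref>` once per image).
present="registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0"

# Images required for the target Kubernetes version.
required="registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/kube-proxy:v1.25.3"

needs_transfer=""
for img in $required; do
  # -x: match the whole line, -F: fixed string, -q: exit status only.
  if ! printf '%s\n' "$present" | grep -qxF "$img"; then
    needs_transfer="$needs_transfer $img"
  fi
done
echo "needs transfer:$needs_transfer"
```

Each image on the resulting list is what the log then removes (`crictl rmi`) and re-imports from `~/.minikube/cache/images`.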
	I1031 17:11:48.014795  190637 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
	I1031 17:11:48.014835  190637 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3
	I1031 17:11:48.014891  190637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3
	I1031 17:11:48.014915  190637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.3
	I1031 17:11:48.014933  190637 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:11:48.014982  190637 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8
	I1031 17:11:48.015057  190637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.8
	I1031 17:11:48.017169  190637 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3
	I1031 17:11:48.017305  190637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.3
	I1031 17:11:48.020383  190637 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0
	I1031 17:11:48.020374  190637 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3
	I1031 17:11:48.020389  190637 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3
	I1031 17:11:48.020467  190637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0
	I1031 17:11:48.020493  190637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I1031 17:11:48.020512  190637 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.25.3': No such file or directory
	I1031 17:11:48.020494  190637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.3
	I1031 17:11:48.020463  190637 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
	I1031 17:11:48.020612  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (14839296 bytes)
	I1031 17:11:48.020567  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 --> /var/lib/minikube/images/kube-apiserver_v1.25.3 (34241024 bytes)
	I1031 17:11:48.071819  190637 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1031 17:11:48.071861  190637 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.25.3': No such file or directory
	I1031 17:11:48.071888  190637 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.4-0': No such file or directory
	I1031 17:11:48.071914  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 --> /var/lib/minikube/images/etcd_3.5.4-0 (102160384 bytes)
	I1031 17:11:48.071919  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 --> /var/lib/minikube/images/kube-proxy_v1.25.3 (20268032 bytes)
	I1031 17:11:48.071871  190637 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.25.3': No such file or directory
	I1031 17:11:48.071913  190637 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.25.3': No such file or directory
	I1031 17:11:48.072002  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 --> /var/lib/minikube/images/kube-controller-manager_v1.25.3 (31264768 bytes)
	I1031 17:11:48.071987  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 --> /var/lib/minikube/images/kube-scheduler_v1.25.3 (15801856 bytes)
	I1031 17:11:48.071920  190637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1031 17:11:48.072181  190637 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.8: stat -c "%s %y" /var/lib/minikube/images/pause_3.8: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.8': No such file or directory
	I1031 17:11:48.072222  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 --> /var/lib/minikube/images/pause_3.8 (311296 bytes)
	I1031 17:11:48.098011  190637 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1031 17:11:48.098058  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1031 17:11:48.148016  190637 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.8
	I1031 17:11:48.148150  190637 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.8
	I1031 17:11:48.394835  190637 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 from cache
	I1031 17:11:48.394878  190637 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.9.3
	I1031 17:11:48.394924  190637 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.9.3
	I1031 17:11:49.373069  190637 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 from cache
	I1031 17:11:49.373119  190637 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1031 17:11:49.373163  190637 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1031 17:11:49.799504  190637 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1031 17:11:49.799541  190637 containerd.go:233] Loading image: /var/lib/minikube/images/kube-scheduler_v1.25.3
	I1031 17:11:49.799585  190637 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.25.3
	I1031 17:11:50.681124  190637 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 from cache
	I1031 17:11:50.681181  190637 containerd.go:233] Loading image: /var/lib/minikube/images/kube-proxy_v1.25.3
	I1031 17:11:50.681271  190637 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.25.3
	I1031 17:11:52.563219  190637 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.25.3: (1.881917205s)
	I1031 17:11:52.563246  190637 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 from cache
	I1031 17:11:52.563280  190637 containerd.go:233] Loading image: /var/lib/minikube/images/kube-apiserver_v1.25.3
	I1031 17:11:52.563355  190637 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.3
	I1031 17:11:55.790704  190637 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.3: (3.227315654s)
	I1031 17:11:55.790737  190637 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 from cache
	I1031 17:11:55.790766  190637 containerd.go:233] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I1031 17:11:55.790810  190637 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I1031 17:11:57.079622  190637 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.3: (1.288785244s)
	I1031 17:11:57.079650  190637 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 from cache
	I1031 17:11:57.079672  190637 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.4-0
	I1031 17:11:57.079715  190637 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0
	I1031 17:12:00.987077  190637 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0: (3.907330659s)
	I1031 17:12:00.987111  190637 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 from cache
	I1031 17:12:00.987140  190637 cache_images.go:123] Successfully loaded all cached images
	I1031 17:12:00.987150  190637 cache_images.go:92] LoadImages completed in 14.834914669s
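Each load above follows the same three-step pattern: `stat` the target tarball on the node, `scp` it from the local cache when the stat fails, then `sudo ctr -n=k8s.io images import` it. A minimal local sketch under stand-in paths (the real flow runs over SSH against /var/lib/minikube/images):

```shell
# Stand-in directories for the local cache and the node's image dir.
cache=$(mktemp -d); dest=$(mktemp -d)
echo "image-tar-bytes" > "$cache/pause_3.8"

# Existence check, exactly as logged: a failing stat means "copy it over".
if ! stat -c "%s %y" "$dest/pause_3.8" >/dev/null 2>&1; then
  cp "$cache/pause_3.8" "$dest/pause_3.8"   # ssh_runner does this via scp
fi
# On the real node this is followed by:
#   sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.8
echo "loaded: $dest/pause_3.8"
```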
	I1031 17:12:00.987216  190637 ssh_runner.go:195] Run: sudo crictl info
	I1031 17:12:01.014339  190637 cni.go:95] Creating CNI manager for ""
	I1031 17:12:01.014365  190637 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 17:12:01.014387  190637 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:12:01.014411  190637 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-171032 NodeName:kubernetes-upgrade-171032 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1031 17:12:01.014600  190637 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-171032"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:12:01.014718  190637 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-171032 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-171032 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:12:01.014778  190637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1031 17:12:01.025014  190637 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:12:01.025153  190637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:12:01.035390  190637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (549 bytes)
	I1031 17:12:01.059841  190637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:12:01.080731  190637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I1031 17:12:01.102613  190637 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1031 17:12:01.107481  190637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
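The /etc/hosts update above is an idempotent strip-and-append: remove any existing control-plane.minikube.internal record, append the current one, and write the file back. Reproduced here against a temp file rather than /etc/hosts (the stale 192.168.49.2 entry is a hypothetical stand-in):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.2\tcontrol-plane.minikube.internal\n' > "$hosts"

# Strip any old record, append the fresh one, replace the file.
{ grep -v 'control-plane\.minikube\.internal' "$hosts"
  printf '192.168.76.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"   # the real flow uses `sudo cp` into /etc/hosts
cat "$hosts"
```

Running it twice leaves exactly one record, which is why minikube can apply it unconditionally on every start.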
	I1031 17:12:01.121871  190637 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kubernetes-upgrade-171032 for IP: 192.168.76.2
	I1031 17:12:01.122002  190637 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key
	I1031 17:12:01.122053  190637 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key
	I1031 17:12:01.122134  190637 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kubernetes-upgrade-171032/client.key
	I1031 17:12:01.122248  190637 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kubernetes-upgrade-171032/apiserver.key.31bdca25
	I1031 17:12:01.122303  190637 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kubernetes-upgrade-171032/proxy-client.key
	I1031 17:12:01.122454  190637 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem (1338 bytes)
	W1031 17:12:01.122501  190637 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097_empty.pem, impossibly tiny 0 bytes
	I1031 17:12:01.122512  190637 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:12:01.122573  190637 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem (1078 bytes)
	I1031 17:12:01.122621  190637 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:12:01.122656  190637 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem (1679 bytes)
	I1031 17:12:01.122709  190637 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem (1708 bytes)
	I1031 17:12:01.123595  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kubernetes-upgrade-171032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:12:01.146997  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kubernetes-upgrade-171032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 17:12:01.176457  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kubernetes-upgrade-171032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:12:01.203921  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kubernetes-upgrade-171032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 17:12:01.226210  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:12:01.247128  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:12:01.273841  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:12:01.295914  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:12:01.313709  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /usr/share/ca-certificates/100972.pem (1708 bytes)
	I1031 17:12:01.331788  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:12:01.356938  190637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem --> /usr/share/ca-certificates/10097.pem (1338 bytes)
	I1031 17:12:01.385565  190637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1031 17:12:01.399097  190637 ssh_runner.go:195] Run: openssl version
	I1031 17:12:01.404263  190637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100972.pem && ln -fs /usr/share/ca-certificates/100972.pem /etc/ssl/certs/100972.pem"
	I1031 17:12:01.412657  190637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100972.pem
	I1031 17:12:01.416330  190637 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 31 16:41 /usr/share/ca-certificates/100972.pem
	I1031 17:12:01.416393  190637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100972.pem
	I1031 17:12:01.421587  190637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100972.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:12:01.428793  190637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:12:01.436212  190637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:12:01.439868  190637 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 31 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:12:01.439926  190637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:12:01.445217  190637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:12:01.455101  190637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10097.pem && ln -fs /usr/share/ca-certificates/10097.pem /etc/ssl/certs/10097.pem"
	I1031 17:12:01.471746  190637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10097.pem
	I1031 17:12:01.477276  190637 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 31 16:41 /usr/share/ca-certificates/10097.pem
	I1031 17:12:01.477340  190637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10097.pem
	I1031 17:12:01.485674  190637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10097.pem /etc/ssl/certs/51391683.0"
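The `openssl x509 -hash` / `ln -fs` sequence above builds OpenSSL's subject-hash lookup layout: each CA PEM under /etc/ssl/certs must be reachable through a `<subject-hash>.0` symlink for verification to find it. Sketched with a throwaway self-signed certificate in a temp directory (the CN and filenames are illustrative):

```shell
dir=$(mktemp -d)
# Throwaway self-signed cert standing in for minikube's CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null

# Same hash the log computes, then the <hash>.0 symlink OpenSSL expects.
h=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$h.0"   # e.g. /etc/ssl/certs/b5213941.0 in the log
echo "linked as $h.0"
```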
	I1031 17:12:01.493400  190637 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-171032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-171032 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:12:01.493487  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1031 17:12:01.493525  190637 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 17:12:01.518295  190637 cri.go:87] found id: ""
	I1031 17:12:01.518370  190637 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:12:01.525963  190637 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1031 17:12:01.525990  190637 kubeadm.go:627] restartCluster start
	I1031 17:12:01.526035  190637 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 17:12:01.533213  190637 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:12:01.534029  190637 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-171032" does not appear in /home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 17:12:01.534434  190637 kubeconfig.go:146] "kubernetes-upgrade-171032" context is missing from /home/jenkins/minikube-integration/15232-3650/kubeconfig - will repair!
	I1031 17:12:01.535066  190637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3650/kubeconfig: {Name:mkbe3dcb9ce3e3942a7be44b5e867e137f1872a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:12:01.536116  190637 kapi.go:59] client config for kubernetes-upgrade-171032: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kubernetes-upgrade-171032/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kubernetes-upgrade-171032/client.key", CAFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1782ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:12:01.536577  190637 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 17:12:01.544048  190637 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-10-31 17:10:54.034002503 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-10-31 17:12:01.092588840 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.76.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-171032
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.25.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1031 17:12:01.544086  190637 kubeadm.go:1114] stopping kube-system containers ...
	I1031 17:12:01.544099  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1031 17:12:01.544147  190637 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 17:12:01.581581  190637 cri.go:87] found id: ""
	I1031 17:12:01.581651  190637 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 17:12:01.593846  190637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:12:01.601630  190637 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5707 Oct 31 17:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Oct 31 17:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Oct 31 17:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5691 Oct 31 17:10 /etc/kubernetes/scheduler.conf
	
	I1031 17:12:01.601686  190637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1031 17:12:01.609074  190637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1031 17:12:01.617387  190637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1031 17:12:01.624848  190637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1031 17:12:01.632201  190637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:12:01.639951  190637 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 17:12:01.639979  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:12:01.697457  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:12:02.396578  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:12:02.533807  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:12:02.585666  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:12:02.649131  190637 api_server.go:51] waiting for apiserver process to appear ...
	I1031 17:12:02.649211  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:12:03.159154  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... identical "sudo pgrep -xnf kube-apiserver.*minikube.*" poll repeated every ~500ms from 17:12:03 through 17:13:02, with no apiserver process appearing ...]
	I1031 17:13:02.658783  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:13:02.658886  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:13:02.690496  190637 cri.go:87] found id: ""
	I1031 17:13:02.690530  190637 logs.go:274] 0 containers: []
	W1031 17:13:02.690540  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:13:02.690548  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:13:02.690602  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:13:02.718482  190637 cri.go:87] found id: ""
	I1031 17:13:02.718518  190637 logs.go:274] 0 containers: []
	W1031 17:13:02.718528  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:13:02.718537  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:13:02.718597  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:13:02.747641  190637 cri.go:87] found id: ""
	I1031 17:13:02.747675  190637 logs.go:274] 0 containers: []
	W1031 17:13:02.747684  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:13:02.747694  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:13:02.747752  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:13:02.780767  190637 cri.go:87] found id: ""
	I1031 17:13:02.780802  190637 logs.go:274] 0 containers: []
	W1031 17:13:02.780811  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:13:02.780819  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:13:02.780859  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:13:02.805890  190637 cri.go:87] found id: ""
	I1031 17:13:02.805914  190637 logs.go:274] 0 containers: []
	W1031 17:13:02.805920  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:13:02.805926  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:13:02.805976  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:13:02.832102  190637 cri.go:87] found id: ""
	I1031 17:13:02.832135  190637 logs.go:274] 0 containers: []
	W1031 17:13:02.832144  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:13:02.832151  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:13:02.832206  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:13:02.857465  190637 cri.go:87] found id: ""
	I1031 17:13:02.857492  190637 logs.go:274] 0 containers: []
	W1031 17:13:02.857501  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:13:02.857508  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:13:02.857562  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:13:02.884875  190637 cri.go:87] found id: ""
	I1031 17:13:02.884905  190637 logs.go:274] 0 containers: []
	W1031 17:13:02.884914  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:13:02.884928  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:13:02.884943  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:13:02.903500  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:13 kubernetes-upgrade-171032 kubelet[1396]: E1031 17:12:13.107729    1396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	[... same kubelet problem — E run.go:74 "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir" — repeated on every kubelet restart (~750ms apart, new PID each time), Oct 31 17:12:13 through 17:12:56 ...]
	W1031 17:13:02.924780  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:57 kubernetes-upgrade-171032 kubelet[2230]: E1031 17:12:57.357842    2230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:02.925135  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:58 kubernetes-upgrade-171032 kubelet[2243]: E1031 17:12:58.110596    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:02.925498  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:58 kubernetes-upgrade-171032 kubelet[2257]: E1031 17:12:58.858287    2257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:02.925863  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:59 kubernetes-upgrade-171032 kubelet[2270]: E1031 17:12:59.608888    2270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:02.926250  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:00 kubernetes-upgrade-171032 kubelet[2285]: E1031 17:13:00.355662    2285 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:02.926640  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:01 kubernetes-upgrade-171032 kubelet[2298]: E1031 17:13:01.107083    2298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:02.926992  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:01 kubernetes-upgrade-171032 kubelet[2313]: E1031 17:13:01.867427    2313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:02.927350  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:02 kubernetes-upgrade-171032 kubelet[2326]: E1031 17:13:02.621962    2326 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:13:02.927469  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:13:02.927488  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:13:02.944748  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:13:02.944788  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:13:03.004707  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:13:03.004732  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:13:03.004744  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:13:03.041790  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:13:03.041834  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:13:03.071951  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:13:03.071978  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:13:03.072115  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:13:03.072139  190637 out.go:239]   Oct 31 17:12:59 kubernetes-upgrade-171032 kubelet[2270]: E1031 17:12:59.608888    2270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:03.072147  190637 out.go:239]   Oct 31 17:13:00 kubernetes-upgrade-171032 kubelet[2285]: E1031 17:13:00.355662    2285 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:03.072156  190637 out.go:239]   Oct 31 17:13:01 kubernetes-upgrade-171032 kubelet[2298]: E1031 17:13:01.107083    2298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:03.072166  190637 out.go:239]   Oct 31 17:13:01 kubernetes-upgrade-171032 kubelet[2313]: E1031 17:13:01.867427    2313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:03.072175  190637 out.go:239]   Oct 31 17:13:02 kubernetes-upgrade-171032 kubelet[2326]: E1031 17:13:02.621962    2326 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:13:03.072185  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:13:03.072192  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:13:13.072925  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:13:13.158673  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:13:13.158764  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:13:13.185680  190637 cri.go:87] found id: ""
	I1031 17:13:13.185709  190637 logs.go:274] 0 containers: []
	W1031 17:13:13.185718  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:13:13.185726  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:13:13.185798  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:13:13.211394  190637 cri.go:87] found id: ""
	I1031 17:13:13.211423  190637 logs.go:274] 0 containers: []
	W1031 17:13:13.211429  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:13:13.211437  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:13:13.211487  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:13:13.236126  190637 cri.go:87] found id: ""
	I1031 17:13:13.236156  190637 logs.go:274] 0 containers: []
	W1031 17:13:13.236165  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:13:13.236173  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:13:13.236234  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:13:13.260166  190637 cri.go:87] found id: ""
	I1031 17:13:13.260190  190637 logs.go:274] 0 containers: []
	W1031 17:13:13.260196  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:13:13.260202  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:13:13.260243  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:13:13.285333  190637 cri.go:87] found id: ""
	I1031 17:13:13.285357  190637 logs.go:274] 0 containers: []
	W1031 17:13:13.285363  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:13:13.285369  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:13:13.285412  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:13:13.310763  190637 cri.go:87] found id: ""
	I1031 17:13:13.310793  190637 logs.go:274] 0 containers: []
	W1031 17:13:13.310801  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:13:13.310809  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:13:13.310871  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:13:13.336303  190637 cri.go:87] found id: ""
	I1031 17:13:13.336330  190637 logs.go:274] 0 containers: []
	W1031 17:13:13.336336  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:13:13.336342  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:13:13.336399  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:13:13.360587  190637 cri.go:87] found id: ""
	I1031 17:13:13.360610  190637 logs.go:274] 0 containers: []
	W1031 17:13:13.360616  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:13:13.360624  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:13:13.360636  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:13:13.418396  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:13:13.418424  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:13:13.418437  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:13:13.456879  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:13:13.456919  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:13:13.485080  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:13:13.485109  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:13:13.502111  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:23 kubernetes-upgrade-171032 kubelet[1592]: E1031 17:12:23.605474    1592 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.502480  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:24 kubernetes-upgrade-171032 kubelet[1607]: E1031 17:12:24.357438    1607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.502836  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:25 kubernetes-upgrade-171032 kubelet[1620]: E1031 17:12:25.108211    1620 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.503193  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:25 kubernetes-upgrade-171032 kubelet[1636]: E1031 17:12:25.857951    1636 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.503550  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:26 kubernetes-upgrade-171032 kubelet[1649]: E1031 17:12:26.606073    1649 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.503902  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:27 kubernetes-upgrade-171032 kubelet[1665]: E1031 17:12:27.356188    1665 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.504297  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:28 kubernetes-upgrade-171032 kubelet[1678]: E1031 17:12:28.106711    1678 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.504645  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:28 kubernetes-upgrade-171032 kubelet[1693]: E1031 17:12:28.857873    1693 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.505005  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:29 kubernetes-upgrade-171032 kubelet[1706]: E1031 17:12:29.610537    1706 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.505357  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:30 kubernetes-upgrade-171032 kubelet[1721]: E1031 17:12:30.356367    1721 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.505706  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:31 kubernetes-upgrade-171032 kubelet[1734]: E1031 17:12:31.106788    1734 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.506095  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:31 kubernetes-upgrade-171032 kubelet[1749]: E1031 17:12:31.858217    1749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.506452  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:32 kubernetes-upgrade-171032 kubelet[1762]: E1031 17:12:32.614017    1762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.506808  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:33 kubernetes-upgrade-171032 kubelet[1776]: E1031 17:12:33.357826    1776 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.507250  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:34 kubernetes-upgrade-171032 kubelet[1788]: E1031 17:12:34.106235    1788 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.507697  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:34 kubernetes-upgrade-171032 kubelet[1803]: E1031 17:12:34.857360    1803 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.508056  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:35 kubernetes-upgrade-171032 kubelet[1816]: E1031 17:12:35.607739    1816 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.508495  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:36 kubernetes-upgrade-171032 kubelet[1832]: E1031 17:12:36.356573    1832 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.508857  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:37 kubernetes-upgrade-171032 kubelet[1845]: E1031 17:12:37.109671    1845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.509251  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:37 kubernetes-upgrade-171032 kubelet[1860]: E1031 17:12:37.857533    1860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.509675  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:38 kubernetes-upgrade-171032 kubelet[1873]: E1031 17:12:38.606465    1873 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.510033  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:39 kubernetes-upgrade-171032 kubelet[1888]: E1031 17:12:39.359796    1888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.510385  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:40 kubernetes-upgrade-171032 kubelet[1901]: E1031 17:12:40.110249    1901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.510735  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:40 kubernetes-upgrade-171032 kubelet[1916]: E1031 17:12:40.860887    1916 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.511082  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:41 kubernetes-upgrade-171032 kubelet[1929]: E1031 17:12:41.619766    1929 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.511426  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:42 kubernetes-upgrade-171032 kubelet[1944]: E1031 17:12:42.367299    1944 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.511776  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:43 kubernetes-upgrade-171032 kubelet[1957]: E1031 17:12:43.109491    1957 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.512162  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:43 kubernetes-upgrade-171032 kubelet[1973]: E1031 17:12:43.858797    1973 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.512518  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:44 kubernetes-upgrade-171032 kubelet[1988]: E1031 17:12:44.608242    1988 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.512884  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:45 kubernetes-upgrade-171032 kubelet[2003]: E1031 17:12:45.358577    2003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.513236  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:46 kubernetes-upgrade-171032 kubelet[2016]: E1031 17:12:46.111102    2016 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.513594  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:46 kubernetes-upgrade-171032 kubelet[2031]: E1031 17:12:46.859712    2031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.513957  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:47 kubernetes-upgrade-171032 kubelet[2044]: E1031 17:12:47.608777    2044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.514312  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:48 kubernetes-upgrade-171032 kubelet[2059]: E1031 17:12:48.358622    2059 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.514671  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:49 kubernetes-upgrade-171032 kubelet[2072]: E1031 17:12:49.106888    2072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.515046  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:49 kubernetes-upgrade-171032 kubelet[2087]: E1031 17:12:49.857313    2087 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.515402  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:50 kubernetes-upgrade-171032 kubelet[2100]: E1031 17:12:50.605889    2100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.515758  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:51 kubernetes-upgrade-171032 kubelet[2115]: E1031 17:12:51.356044    2115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.516136  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:52 kubernetes-upgrade-171032 kubelet[2128]: E1031 17:12:52.107119    2128 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.516488  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:52 kubernetes-upgrade-171032 kubelet[2143]: E1031 17:12:52.859152    2143 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.516850  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:53 kubernetes-upgrade-171032 kubelet[2157]: E1031 17:12:53.609135    2157 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.517206  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:54 kubernetes-upgrade-171032 kubelet[2173]: E1031 17:12:54.356206    2173 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.517557  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:55 kubernetes-upgrade-171032 kubelet[2186]: E1031 17:12:55.109748    2186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.517934  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:55 kubernetes-upgrade-171032 kubelet[2201]: E1031 17:12:55.857812    2201 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.518286  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:56 kubernetes-upgrade-171032 kubelet[2214]: E1031 17:12:56.610074    2214 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.518635  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:57 kubernetes-upgrade-171032 kubelet[2230]: E1031 17:12:57.357842    2230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.519092  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:58 kubernetes-upgrade-171032 kubelet[2243]: E1031 17:12:58.110596    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.519517  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:58 kubernetes-upgrade-171032 kubelet[2257]: E1031 17:12:58.858287    2257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.519902  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:59 kubernetes-upgrade-171032 kubelet[2270]: E1031 17:12:59.608888    2270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.520341  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:00 kubernetes-upgrade-171032 kubelet[2285]: E1031 17:13:00.355662    2285 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.521006  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:01 kubernetes-upgrade-171032 kubelet[2298]: E1031 17:13:01.107083    2298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.521406  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:01 kubernetes-upgrade-171032 kubelet[2313]: E1031 17:13:01.867427    2313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.521781  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:02 kubernetes-upgrade-171032 kubelet[2326]: E1031 17:13:02.621962    2326 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.522131  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:03 kubernetes-upgrade-171032 kubelet[2472]: E1031 17:13:03.365265    2472 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.522501  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:04 kubernetes-upgrade-171032 kubelet[2482]: E1031 17:13:04.113629    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.522868  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:04 kubernetes-upgrade-171032 kubelet[2492]: E1031 17:13:04.865625    2492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.523229  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:05 kubernetes-upgrade-171032 kubelet[2503]: E1031 17:13:05.616022    2503 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.523620  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:06 kubernetes-upgrade-171032 kubelet[2514]: E1031 17:13:06.357325    2514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.523976  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:07 kubernetes-upgrade-171032 kubelet[2524]: E1031 17:13:07.108543    2524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.524368  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:07 kubernetes-upgrade-171032 kubelet[2535]: E1031 17:13:07.859345    2535 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.524722  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:08 kubernetes-upgrade-171032 kubelet[2546]: E1031 17:13:08.610442    2546 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.525085  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:09 kubernetes-upgrade-171032 kubelet[2557]: E1031 17:13:09.357591    2557 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.525450  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:10 kubernetes-upgrade-171032 kubelet[2568]: E1031 17:13:10.118835    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.525801  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:10 kubernetes-upgrade-171032 kubelet[2579]: E1031 17:13:10.859118    2579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.526144  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:11 kubernetes-upgrade-171032 kubelet[2591]: E1031 17:13:11.608449    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.526499  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:12 kubernetes-upgrade-171032 kubelet[2602]: E1031 17:13:12.357241    2602 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.526846  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:13 kubernetes-upgrade-171032 kubelet[2613]: E1031 17:13:13.110342    2613 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:13:13.526961  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:13:13.526977  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:13:13.542747  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:13:13.542776  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:13:13.542893  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:13:13.542911  190637 out.go:239]   Oct 31 17:13:10 kubernetes-upgrade-171032 kubelet[2568]: E1031 17:13:10.118835    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.542921  190637 out.go:239]   Oct 31 17:13:10 kubernetes-upgrade-171032 kubelet[2579]: E1031 17:13:10.859118    2579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.542934  190637 out.go:239]   Oct 31 17:13:11 kubernetes-upgrade-171032 kubelet[2591]: E1031 17:13:11.608449    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.542941  190637 out.go:239]   Oct 31 17:13:12 kubernetes-upgrade-171032 kubelet[2602]: E1031 17:13:12.357241    2602 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:13.542947  190637 out.go:239]   Oct 31 17:13:13 kubernetes-upgrade-171032 kubelet[2613]: E1031 17:13:13.110342    2613 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:13:13.542957  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:13:13.542964  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:13:23.543893  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:13:23.659097  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:13:23.659172  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:13:23.684173  190637 cri.go:87] found id: ""
	I1031 17:13:23.684204  190637 logs.go:274] 0 containers: []
	W1031 17:13:23.684210  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:13:23.684216  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:13:23.684291  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:13:23.709006  190637 cri.go:87] found id: ""
	I1031 17:13:23.709037  190637 logs.go:274] 0 containers: []
	W1031 17:13:23.709045  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:13:23.709057  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:13:23.709107  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:13:23.735825  190637 cri.go:87] found id: ""
	I1031 17:13:23.735855  190637 logs.go:274] 0 containers: []
	W1031 17:13:23.735870  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:13:23.735881  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:13:23.735937  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:13:23.762041  190637 cri.go:87] found id: ""
	I1031 17:13:23.762064  190637 logs.go:274] 0 containers: []
	W1031 17:13:23.762070  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:13:23.762075  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:13:23.762115  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:13:23.789436  190637 cri.go:87] found id: ""
	I1031 17:13:23.789469  190637 logs.go:274] 0 containers: []
	W1031 17:13:23.789479  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:13:23.789489  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:13:23.789543  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:13:23.814192  190637 cri.go:87] found id: ""
	I1031 17:13:23.814219  190637 logs.go:274] 0 containers: []
	W1031 17:13:23.814229  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:13:23.814237  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:13:23.814287  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:13:23.839027  190637 cri.go:87] found id: ""
	I1031 17:13:23.839053  190637 logs.go:274] 0 containers: []
	W1031 17:13:23.839060  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:13:23.839067  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:13:23.839170  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:13:23.867623  190637 cri.go:87] found id: ""
	I1031 17:13:23.867650  190637 logs.go:274] 0 containers: []
	W1031 17:13:23.867657  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:13:23.867667  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:13:23.867678  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:13:23.886648  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:34 kubernetes-upgrade-171032 kubelet[1788]: E1031 17:12:34.106235    1788 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.887020  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:34 kubernetes-upgrade-171032 kubelet[1803]: E1031 17:12:34.857360    1803 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.887383  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:35 kubernetes-upgrade-171032 kubelet[1816]: E1031 17:12:35.607739    1816 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.887730  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:36 kubernetes-upgrade-171032 kubelet[1832]: E1031 17:12:36.356573    1832 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.888131  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:37 kubernetes-upgrade-171032 kubelet[1845]: E1031 17:12:37.109671    1845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.888488  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:37 kubernetes-upgrade-171032 kubelet[1860]: E1031 17:12:37.857533    1860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.888941  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:38 kubernetes-upgrade-171032 kubelet[1873]: E1031 17:12:38.606465    1873 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.889517  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:39 kubernetes-upgrade-171032 kubelet[1888]: E1031 17:12:39.359796    1888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.889885  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:40 kubernetes-upgrade-171032 kubelet[1901]: E1031 17:12:40.110249    1901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.890239  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:40 kubernetes-upgrade-171032 kubelet[1916]: E1031 17:12:40.860887    1916 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.890629  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:41 kubernetes-upgrade-171032 kubelet[1929]: E1031 17:12:41.619766    1929 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.891001  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:42 kubernetes-upgrade-171032 kubelet[1944]: E1031 17:12:42.367299    1944 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.891363  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:43 kubernetes-upgrade-171032 kubelet[1957]: E1031 17:12:43.109491    1957 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.891706  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:43 kubernetes-upgrade-171032 kubelet[1973]: E1031 17:12:43.858797    1973 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.892120  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:44 kubernetes-upgrade-171032 kubelet[1988]: E1031 17:12:44.608242    1988 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.892546  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:45 kubernetes-upgrade-171032 kubelet[2003]: E1031 17:12:45.358577    2003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.892933  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:46 kubernetes-upgrade-171032 kubelet[2016]: E1031 17:12:46.111102    2016 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.893324  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:46 kubernetes-upgrade-171032 kubelet[2031]: E1031 17:12:46.859712    2031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.893672  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:47 kubernetes-upgrade-171032 kubelet[2044]: E1031 17:12:47.608777    2044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.894018  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:48 kubernetes-upgrade-171032 kubelet[2059]: E1031 17:12:48.358622    2059 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.894368  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:49 kubernetes-upgrade-171032 kubelet[2072]: E1031 17:12:49.106888    2072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.894715  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:49 kubernetes-upgrade-171032 kubelet[2087]: E1031 17:12:49.857313    2087 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.895064  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:50 kubernetes-upgrade-171032 kubelet[2100]: E1031 17:12:50.605889    2100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.895417  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:51 kubernetes-upgrade-171032 kubelet[2115]: E1031 17:12:51.356044    2115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.895769  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:52 kubernetes-upgrade-171032 kubelet[2128]: E1031 17:12:52.107119    2128 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.896157  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:52 kubernetes-upgrade-171032 kubelet[2143]: E1031 17:12:52.859152    2143 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.896511  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:53 kubernetes-upgrade-171032 kubelet[2157]: E1031 17:12:53.609135    2157 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.896864  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:54 kubernetes-upgrade-171032 kubelet[2173]: E1031 17:12:54.356206    2173 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.897205  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:55 kubernetes-upgrade-171032 kubelet[2186]: E1031 17:12:55.109748    2186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.897555  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:55 kubernetes-upgrade-171032 kubelet[2201]: E1031 17:12:55.857812    2201 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.897904  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:56 kubernetes-upgrade-171032 kubelet[2214]: E1031 17:12:56.610074    2214 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.898257  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:57 kubernetes-upgrade-171032 kubelet[2230]: E1031 17:12:57.357842    2230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.898608  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:58 kubernetes-upgrade-171032 kubelet[2243]: E1031 17:12:58.110596    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.898960  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:58 kubernetes-upgrade-171032 kubelet[2257]: E1031 17:12:58.858287    2257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.899337  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:59 kubernetes-upgrade-171032 kubelet[2270]: E1031 17:12:59.608888    2270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.899687  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:00 kubernetes-upgrade-171032 kubelet[2285]: E1031 17:13:00.355662    2285 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.900032  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:01 kubernetes-upgrade-171032 kubelet[2298]: E1031 17:13:01.107083    2298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.900452  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:01 kubernetes-upgrade-171032 kubelet[2313]: E1031 17:13:01.867427    2313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.900807  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:02 kubernetes-upgrade-171032 kubelet[2326]: E1031 17:13:02.621962    2326 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.901154  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:03 kubernetes-upgrade-171032 kubelet[2472]: E1031 17:13:03.365265    2472 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.901530  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:04 kubernetes-upgrade-171032 kubelet[2482]: E1031 17:13:04.113629    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.901888  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:04 kubernetes-upgrade-171032 kubelet[2492]: E1031 17:13:04.865625    2492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.902236  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:05 kubernetes-upgrade-171032 kubelet[2503]: E1031 17:13:05.616022    2503 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.902594  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:06 kubernetes-upgrade-171032 kubelet[2514]: E1031 17:13:06.357325    2514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.902944  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:07 kubernetes-upgrade-171032 kubelet[2524]: E1031 17:13:07.108543    2524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.903312  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:07 kubernetes-upgrade-171032 kubelet[2535]: E1031 17:13:07.859345    2535 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.903658  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:08 kubernetes-upgrade-171032 kubelet[2546]: E1031 17:13:08.610442    2546 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.904044  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:09 kubernetes-upgrade-171032 kubelet[2557]: E1031 17:13:09.357591    2557 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.904478  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:10 kubernetes-upgrade-171032 kubelet[2568]: E1031 17:13:10.118835    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.904852  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:10 kubernetes-upgrade-171032 kubelet[2579]: E1031 17:13:10.859118    2579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.905231  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:11 kubernetes-upgrade-171032 kubelet[2591]: E1031 17:13:11.608449    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.905636  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:12 kubernetes-upgrade-171032 kubelet[2602]: E1031 17:13:12.357241    2602 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.906036  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:13 kubernetes-upgrade-171032 kubelet[2613]: E1031 17:13:13.110342    2613 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.906429  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:13 kubernetes-upgrade-171032 kubelet[2762]: E1031 17:13:13.856057    2762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.906802  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:14 kubernetes-upgrade-171032 kubelet[2773]: E1031 17:13:14.610703    2773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.907170  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:15 kubernetes-upgrade-171032 kubelet[2784]: E1031 17:13:15.364002    2784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.907546  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:16 kubernetes-upgrade-171032 kubelet[2795]: E1031 17:13:16.110911    2795 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.907923  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:16 kubernetes-upgrade-171032 kubelet[2806]: E1031 17:13:16.859668    2806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.908332  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:17 kubernetes-upgrade-171032 kubelet[2817]: E1031 17:13:17.607219    2817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.908825  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:18 kubernetes-upgrade-171032 kubelet[2828]: E1031 17:13:18.357740    2828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.909409  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:19 kubernetes-upgrade-171032 kubelet[2839]: E1031 17:13:19.108336    2839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.909944  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:19 kubernetes-upgrade-171032 kubelet[2850]: E1031 17:13:19.857866    2850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.910479  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:20 kubernetes-upgrade-171032 kubelet[2860]: E1031 17:13:20.608606    2860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.911031  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:21 kubernetes-upgrade-171032 kubelet[2872]: E1031 17:13:21.357964    2872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.911546  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:22 kubernetes-upgrade-171032 kubelet[2882]: E1031 17:13:22.108680    2882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.912157  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:22 kubernetes-upgrade-171032 kubelet[2893]: E1031 17:13:22.868814    2893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:23.912677  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:23 kubernetes-upgrade-171032 kubelet[2906]: E1031 17:13:23.615922    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:13:23.912798  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:13:23.912816  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:13:23.929645  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:13:23.929692  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:13:24.012039  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:13:24.012060  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:13:24.012105  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:13:24.051286  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:13:24.051350  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:13:24.085564  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:13:24.085590  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:13:24.085710  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:13:24.085728  190637 out.go:239]   Oct 31 17:13:20 kubernetes-upgrade-171032 kubelet[2860]: E1031 17:13:20.608606    2860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:24.085736  190637 out.go:239]   Oct 31 17:13:21 kubernetes-upgrade-171032 kubelet[2872]: E1031 17:13:21.357964    2872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:24.085743  190637 out.go:239]   Oct 31 17:13:22 kubernetes-upgrade-171032 kubelet[2882]: E1031 17:13:22.108680    2882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:24.085760  190637 out.go:239]   Oct 31 17:13:22 kubernetes-upgrade-171032 kubelet[2893]: E1031 17:13:22.868814    2893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:24.085767  190637 out.go:239]   Oct 31 17:13:23 kubernetes-upgrade-171032 kubelet[2906]: E1031 17:13:23.615922    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
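Every kubelet failure reported above is the same root cause: the kubelet binary shipped with the upgrade target rejects the `--cni-conf-dir` flag (this flag was removed from the kubelet along with dockershim in Kubernetes 1.24, so kubelets at or above that version exit with "unknown flag"). As a minimal sketch of how the offending flag name can be pulled out of one such journal line with POSIX shell tools (the sample line is copied from the log above; the `sed` pattern is an assumption, not part of minikube's own tooling):

```shell
# One kubelet journal line, verbatim from the log above.
line='Oct 31 17:13:23 kubernetes-upgrade-171032 kubelet[2906]: E1031 17:13:23.615922    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"'

# Extract the rejected flag name: match the text after 'unknown flag: '
# up to the closing quote and print only the captured group.
flag=$(printf '%s\n' "$line" | sed -n 's/.*unknown flag: \(--[a-z-]*\)".*/\1/p')
echo "$flag"
```

Running this prints `--cni-conf-dir`, which is the flag that would need to be dropped from the kubelet's systemd drop-in (or moved into the kubelet config file) for the upgraded kubelet to start.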
	I1031 17:13:24.085772  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:13:24.085779  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:13:34.086652  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:13:34.159061  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:13:34.159136  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:13:34.183959  190637 cri.go:87] found id: ""
	I1031 17:13:34.183987  190637 logs.go:274] 0 containers: []
	W1031 17:13:34.183993  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:13:34.183998  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:13:34.184047  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:13:34.207982  190637 cri.go:87] found id: ""
	I1031 17:13:34.208005  190637 logs.go:274] 0 containers: []
	W1031 17:13:34.208010  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:13:34.208021  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:13:34.208092  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:13:34.232685  190637 cri.go:87] found id: ""
	I1031 17:13:34.232711  190637 logs.go:274] 0 containers: []
	W1031 17:13:34.232718  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:13:34.232725  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:13:34.232782  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:13:34.257719  190637 cri.go:87] found id: ""
	I1031 17:13:34.257754  190637 logs.go:274] 0 containers: []
	W1031 17:13:34.257764  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:13:34.257772  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:13:34.257824  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:13:34.281983  190637 cri.go:87] found id: ""
	I1031 17:13:34.282029  190637 logs.go:274] 0 containers: []
	W1031 17:13:34.282039  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:13:34.282047  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:13:34.282090  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:13:34.306301  190637 cri.go:87] found id: ""
	I1031 17:13:34.306344  190637 logs.go:274] 0 containers: []
	W1031 17:13:34.306352  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:13:34.306358  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:13:34.306410  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:13:34.330057  190637 cri.go:87] found id: ""
	I1031 17:13:34.330086  190637 logs.go:274] 0 containers: []
	W1031 17:13:34.330093  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:13:34.330099  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:13:34.330141  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:13:34.353367  190637 cri.go:87] found id: ""
	I1031 17:13:34.353394  190637 logs.go:274] 0 containers: []
	W1031 17:13:34.353403  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:13:34.353413  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:13:34.353426  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:13:34.370498  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:44 kubernetes-upgrade-171032 kubelet[1988]: E1031 17:12:44.608242    1988 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.370892  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:45 kubernetes-upgrade-171032 kubelet[2003]: E1031 17:12:45.358577    2003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.371303  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:46 kubernetes-upgrade-171032 kubelet[2016]: E1031 17:12:46.111102    2016 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.371689  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:46 kubernetes-upgrade-171032 kubelet[2031]: E1031 17:12:46.859712    2031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.372062  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:47 kubernetes-upgrade-171032 kubelet[2044]: E1031 17:12:47.608777    2044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.372450  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:48 kubernetes-upgrade-171032 kubelet[2059]: E1031 17:12:48.358622    2059 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.372816  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:49 kubernetes-upgrade-171032 kubelet[2072]: E1031 17:12:49.106888    2072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.373184  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:49 kubernetes-upgrade-171032 kubelet[2087]: E1031 17:12:49.857313    2087 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.373574  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:50 kubernetes-upgrade-171032 kubelet[2100]: E1031 17:12:50.605889    2100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.373939  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:51 kubernetes-upgrade-171032 kubelet[2115]: E1031 17:12:51.356044    2115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.374305  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:52 kubernetes-upgrade-171032 kubelet[2128]: E1031 17:12:52.107119    2128 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.374697  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:52 kubernetes-upgrade-171032 kubelet[2143]: E1031 17:12:52.859152    2143 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.375060  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:53 kubernetes-upgrade-171032 kubelet[2157]: E1031 17:12:53.609135    2157 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.375436  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:54 kubernetes-upgrade-171032 kubelet[2173]: E1031 17:12:54.356206    2173 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.375803  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:55 kubernetes-upgrade-171032 kubelet[2186]: E1031 17:12:55.109748    2186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.376234  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:55 kubernetes-upgrade-171032 kubelet[2201]: E1031 17:12:55.857812    2201 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.376616  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:56 kubernetes-upgrade-171032 kubelet[2214]: E1031 17:12:56.610074    2214 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.377006  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:57 kubernetes-upgrade-171032 kubelet[2230]: E1031 17:12:57.357842    2230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.377384  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:58 kubernetes-upgrade-171032 kubelet[2243]: E1031 17:12:58.110596    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.377745  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:58 kubernetes-upgrade-171032 kubelet[2257]: E1031 17:12:58.858287    2257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.378107  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:59 kubernetes-upgrade-171032 kubelet[2270]: E1031 17:12:59.608888    2270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.378494  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:00 kubernetes-upgrade-171032 kubelet[2285]: E1031 17:13:00.355662    2285 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.378869  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:01 kubernetes-upgrade-171032 kubelet[2298]: E1031 17:13:01.107083    2298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.379218  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:01 kubernetes-upgrade-171032 kubelet[2313]: E1031 17:13:01.867427    2313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.379597  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:02 kubernetes-upgrade-171032 kubelet[2326]: E1031 17:13:02.621962    2326 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.379952  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:03 kubernetes-upgrade-171032 kubelet[2472]: E1031 17:13:03.365265    2472 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.380335  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:04 kubernetes-upgrade-171032 kubelet[2482]: E1031 17:13:04.113629    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.380731  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:04 kubernetes-upgrade-171032 kubelet[2492]: E1031 17:13:04.865625    2492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.381092  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:05 kubernetes-upgrade-171032 kubelet[2503]: E1031 17:13:05.616022    2503 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.381451  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:06 kubernetes-upgrade-171032 kubelet[2514]: E1031 17:13:06.357325    2514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.381835  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:07 kubernetes-upgrade-171032 kubelet[2524]: E1031 17:13:07.108543    2524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.382196  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:07 kubernetes-upgrade-171032 kubelet[2535]: E1031 17:13:07.859345    2535 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.382556  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:08 kubernetes-upgrade-171032 kubelet[2546]: E1031 17:13:08.610442    2546 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.382933  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:09 kubernetes-upgrade-171032 kubelet[2557]: E1031 17:13:09.357591    2557 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.383295  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:10 kubernetes-upgrade-171032 kubelet[2568]: E1031 17:13:10.118835    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.383649  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:10 kubernetes-upgrade-171032 kubelet[2579]: E1031 17:13:10.859118    2579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.384002  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:11 kubernetes-upgrade-171032 kubelet[2591]: E1031 17:13:11.608449    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.384416  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:12 kubernetes-upgrade-171032 kubelet[2602]: E1031 17:13:12.357241    2602 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.384773  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:13 kubernetes-upgrade-171032 kubelet[2613]: E1031 17:13:13.110342    2613 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.385127  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:13 kubernetes-upgrade-171032 kubelet[2762]: E1031 17:13:13.856057    2762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.385483  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:14 kubernetes-upgrade-171032 kubelet[2773]: E1031 17:13:14.610703    2773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.385837  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:15 kubernetes-upgrade-171032 kubelet[2784]: E1031 17:13:15.364002    2784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.386185  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:16 kubernetes-upgrade-171032 kubelet[2795]: E1031 17:13:16.110911    2795 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.386537  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:16 kubernetes-upgrade-171032 kubelet[2806]: E1031 17:13:16.859668    2806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.386911  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:17 kubernetes-upgrade-171032 kubelet[2817]: E1031 17:13:17.607219    2817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.387262  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:18 kubernetes-upgrade-171032 kubelet[2828]: E1031 17:13:18.357740    2828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.387615  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:19 kubernetes-upgrade-171032 kubelet[2839]: E1031 17:13:19.108336    2839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.387963  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:19 kubernetes-upgrade-171032 kubelet[2850]: E1031 17:13:19.857866    2850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.388331  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:20 kubernetes-upgrade-171032 kubelet[2860]: E1031 17:13:20.608606    2860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.388688  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:21 kubernetes-upgrade-171032 kubelet[2872]: E1031 17:13:21.357964    2872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.389036  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:22 kubernetes-upgrade-171032 kubelet[2882]: E1031 17:13:22.108680    2882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.389412  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:22 kubernetes-upgrade-171032 kubelet[2893]: E1031 17:13:22.868814    2893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.389770  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:23 kubernetes-upgrade-171032 kubelet[2906]: E1031 17:13:23.615922    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.390124  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:24 kubernetes-upgrade-171032 kubelet[3049]: E1031 17:13:24.392205    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.390498  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:25 kubernetes-upgrade-171032 kubelet[3060]: E1031 17:13:25.108044    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.390847  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:25 kubernetes-upgrade-171032 kubelet[3071]: E1031 17:13:25.859282    3071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.391200  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:26 kubernetes-upgrade-171032 kubelet[3083]: E1031 17:13:26.610036    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.391550  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:27 kubernetes-upgrade-171032 kubelet[3094]: E1031 17:13:27.357276    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.391901  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:28 kubernetes-upgrade-171032 kubelet[3105]: E1031 17:13:28.110002    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.392331  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:28 kubernetes-upgrade-171032 kubelet[3117]: E1031 17:13:28.857456    3117 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.392693  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:29 kubernetes-upgrade-171032 kubelet[3127]: E1031 17:13:29.606648    3127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.393079  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:30 kubernetes-upgrade-171032 kubelet[3138]: E1031 17:13:30.356492    3138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.393435  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3149]: E1031 17:13:31.108742    3149 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.393785  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3160]: E1031 17:13:31.857364    3160 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.394182  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:32 kubernetes-upgrade-171032 kubelet[3171]: E1031 17:13:32.608053    3171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.394538  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:33 kubernetes-upgrade-171032 kubelet[3182]: E1031 17:13:33.355855    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.394891  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:34 kubernetes-upgrade-171032 kubelet[3193]: E1031 17:13:34.110529    3193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:13:34.395022  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:13:34.395042  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:13:34.410496  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:13:34.410525  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:13:34.468053  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:13:34.468103  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:13:34.468117  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:13:34.502218  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:13:34.502258  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:13:34.529534  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:13:34.529563  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:13:34.529672  190637 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1031 17:13:34.529687  190637 out.go:239]   Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3149]: E1031 17:13:31.108742    3149 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3149]: E1031 17:13:31.108742    3149 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.529702  190637 out.go:239]   Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3160]: E1031 17:13:31.857364    3160 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3160]: E1031 17:13:31.857364    3160 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.529721  190637 out.go:239]   Oct 31 17:13:32 kubernetes-upgrade-171032 kubelet[3171]: E1031 17:13:32.608053    3171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:13:32 kubernetes-upgrade-171032 kubelet[3171]: E1031 17:13:32.608053    3171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.529733  190637 out.go:239]   Oct 31 17:13:33 kubernetes-upgrade-171032 kubelet[3182]: E1031 17:13:33.355855    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:13:33 kubernetes-upgrade-171032 kubelet[3182]: E1031 17:13:33.355855    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:34.529741  190637 out.go:239]   Oct 31 17:13:34 kubernetes-upgrade-171032 kubelet[3193]: E1031 17:13:34.110529    3193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:13:34 kubernetes-upgrade-171032 kubelet[3193]: E1031 17:13:34.110529    3193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:13:34.529746  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:13:34.529755  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:13:44.531457  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:13:44.659130  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:13:44.659210  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:13:44.687500  190637 cri.go:87] found id: ""
	I1031 17:13:44.687531  190637 logs.go:274] 0 containers: []
	W1031 17:13:44.687540  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:13:44.687549  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:13:44.687617  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:13:44.716207  190637 cri.go:87] found id: ""
	I1031 17:13:44.716236  190637 logs.go:274] 0 containers: []
	W1031 17:13:44.716244  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:13:44.716252  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:13:44.716319  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:13:44.747081  190637 cri.go:87] found id: ""
	I1031 17:13:44.747119  190637 logs.go:274] 0 containers: []
	W1031 17:13:44.747127  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:13:44.747135  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:13:44.747177  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:13:44.776593  190637 cri.go:87] found id: ""
	I1031 17:13:44.776626  190637 logs.go:274] 0 containers: []
	W1031 17:13:44.776636  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:13:44.776641  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:13:44.776693  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:13:44.804393  190637 cri.go:87] found id: ""
	I1031 17:13:44.804418  190637 logs.go:274] 0 containers: []
	W1031 17:13:44.804426  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:13:44.804433  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:13:44.804482  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:13:44.830431  190637 cri.go:87] found id: ""
	I1031 17:13:44.830464  190637 logs.go:274] 0 containers: []
	W1031 17:13:44.830479  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:13:44.830487  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:13:44.830542  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:13:44.862326  190637 cri.go:87] found id: ""
	I1031 17:13:44.862356  190637 logs.go:274] 0 containers: []
	W1031 17:13:44.862365  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:13:44.862374  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:13:44.862436  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:13:44.889428  190637 cri.go:87] found id: ""
	I1031 17:13:44.889536  190637 logs.go:274] 0 containers: []
	W1031 17:13:44.889549  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:13:44.889561  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:13:44.889581  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:13:44.910019  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:13:44.910059  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:13:44.981733  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:13:44.981757  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:13:44.981769  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:13:45.021545  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:13:45.021582  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:13:45.054564  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:13:45.054599  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:13:45.072042  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:55 kubernetes-upgrade-171032 kubelet[2186]: E1031 17:12:55.109748    2186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.072476  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:55 kubernetes-upgrade-171032 kubelet[2201]: E1031 17:12:55.857812    2201 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.072821  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:56 kubernetes-upgrade-171032 kubelet[2214]: E1031 17:12:56.610074    2214 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.073173  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:57 kubernetes-upgrade-171032 kubelet[2230]: E1031 17:12:57.357842    2230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.073522  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:58 kubernetes-upgrade-171032 kubelet[2243]: E1031 17:12:58.110596    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.073870  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:58 kubernetes-upgrade-171032 kubelet[2257]: E1031 17:12:58.858287    2257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.074223  190637 logs.go:138] Found kubelet problem: Oct 31 17:12:59 kubernetes-upgrade-171032 kubelet[2270]: E1031 17:12:59.608888    2270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.074765  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:00 kubernetes-upgrade-171032 kubelet[2285]: E1031 17:13:00.355662    2285 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.075363  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:01 kubernetes-upgrade-171032 kubelet[2298]: E1031 17:13:01.107083    2298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.075943  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:01 kubernetes-upgrade-171032 kubelet[2313]: E1031 17:13:01.867427    2313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.076624  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:02 kubernetes-upgrade-171032 kubelet[2326]: E1031 17:13:02.621962    2326 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.077147  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:03 kubernetes-upgrade-171032 kubelet[2472]: E1031 17:13:03.365265    2472 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.077517  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:04 kubernetes-upgrade-171032 kubelet[2482]: E1031 17:13:04.113629    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.077877  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:04 kubernetes-upgrade-171032 kubelet[2492]: E1031 17:13:04.865625    2492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.078235  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:05 kubernetes-upgrade-171032 kubelet[2503]: E1031 17:13:05.616022    2503 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.078597  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:06 kubernetes-upgrade-171032 kubelet[2514]: E1031 17:13:06.357325    2514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.079107  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:07 kubernetes-upgrade-171032 kubelet[2524]: E1031 17:13:07.108543    2524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.079535  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:07 kubernetes-upgrade-171032 kubelet[2535]: E1031 17:13:07.859345    2535 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.079888  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:08 kubernetes-upgrade-171032 kubelet[2546]: E1031 17:13:08.610442    2546 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.080294  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:09 kubernetes-upgrade-171032 kubelet[2557]: E1031 17:13:09.357591    2557 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.080640  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:10 kubernetes-upgrade-171032 kubelet[2568]: E1031 17:13:10.118835    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.080985  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:10 kubernetes-upgrade-171032 kubelet[2579]: E1031 17:13:10.859118    2579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.081327  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:11 kubernetes-upgrade-171032 kubelet[2591]: E1031 17:13:11.608449    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.081675  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:12 kubernetes-upgrade-171032 kubelet[2602]: E1031 17:13:12.357241    2602 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.082044  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:13 kubernetes-upgrade-171032 kubelet[2613]: E1031 17:13:13.110342    2613 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.082390  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:13 kubernetes-upgrade-171032 kubelet[2762]: E1031 17:13:13.856057    2762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.082838  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:14 kubernetes-upgrade-171032 kubelet[2773]: E1031 17:13:14.610703    2773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.083463  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:15 kubernetes-upgrade-171032 kubelet[2784]: E1031 17:13:15.364002    2784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.084101  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:16 kubernetes-upgrade-171032 kubelet[2795]: E1031 17:13:16.110911    2795 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.084701  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:16 kubernetes-upgrade-171032 kubelet[2806]: E1031 17:13:16.859668    2806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.085298  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:17 kubernetes-upgrade-171032 kubelet[2817]: E1031 17:13:17.607219    2817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.085808  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:18 kubernetes-upgrade-171032 kubelet[2828]: E1031 17:13:18.357740    2828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.086189  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:19 kubernetes-upgrade-171032 kubelet[2839]: E1031 17:13:19.108336    2839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.086583  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:19 kubernetes-upgrade-171032 kubelet[2850]: E1031 17:13:19.857866    2850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.087035  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:20 kubernetes-upgrade-171032 kubelet[2860]: E1031 17:13:20.608606    2860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.087567  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:21 kubernetes-upgrade-171032 kubelet[2872]: E1031 17:13:21.357964    2872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.088217  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:22 kubernetes-upgrade-171032 kubelet[2882]: E1031 17:13:22.108680    2882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.088687  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:22 kubernetes-upgrade-171032 kubelet[2893]: E1031 17:13:22.868814    2893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.089040  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:23 kubernetes-upgrade-171032 kubelet[2906]: E1031 17:13:23.615922    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.089436  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:24 kubernetes-upgrade-171032 kubelet[3049]: E1031 17:13:24.392205    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.089791  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:25 kubernetes-upgrade-171032 kubelet[3060]: E1031 17:13:25.108044    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.090145  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:25 kubernetes-upgrade-171032 kubelet[3071]: E1031 17:13:25.859282    3071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.090491  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:26 kubernetes-upgrade-171032 kubelet[3083]: E1031 17:13:26.610036    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.090845  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:27 kubernetes-upgrade-171032 kubelet[3094]: E1031 17:13:27.357276    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.091237  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:28 kubernetes-upgrade-171032 kubelet[3105]: E1031 17:13:28.110002    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.091745  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:28 kubernetes-upgrade-171032 kubelet[3117]: E1031 17:13:28.857456    3117 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.092141  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:29 kubernetes-upgrade-171032 kubelet[3127]: E1031 17:13:29.606648    3127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.092498  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:30 kubernetes-upgrade-171032 kubelet[3138]: E1031 17:13:30.356492    3138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.092844  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3149]: E1031 17:13:31.108742    3149 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.093192  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3160]: E1031 17:13:31.857364    3160 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.093542  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:32 kubernetes-upgrade-171032 kubelet[3171]: E1031 17:13:32.608053    3171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.093904  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:33 kubernetes-upgrade-171032 kubelet[3182]: E1031 17:13:33.355855    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.094268  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:34 kubernetes-upgrade-171032 kubelet[3193]: E1031 17:13:34.110529    3193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.094728  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:34 kubernetes-upgrade-171032 kubelet[3339]: E1031 17:13:34.857484    3339 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.095303  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:35 kubernetes-upgrade-171032 kubelet[3350]: E1031 17:13:35.607438    3350 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.095866  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:36 kubernetes-upgrade-171032 kubelet[3361]: E1031 17:13:36.357949    3361 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.096337  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:37 kubernetes-upgrade-171032 kubelet[3374]: E1031 17:13:37.106348    3374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.096694  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:37 kubernetes-upgrade-171032 kubelet[3386]: E1031 17:13:37.858051    3386 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.097050  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:38 kubernetes-upgrade-171032 kubelet[3397]: E1031 17:13:38.611565    3397 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.097422  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:39 kubernetes-upgrade-171032 kubelet[3408]: E1031 17:13:39.358596    3408 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.097789  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:40 kubernetes-upgrade-171032 kubelet[3419]: E1031 17:13:40.108344    3419 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.098139  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:40 kubernetes-upgrade-171032 kubelet[3430]: E1031 17:13:40.857942    3430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.098486  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:41 kubernetes-upgrade-171032 kubelet[3442]: E1031 17:13:41.610478    3442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.098844  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:42 kubernetes-upgrade-171032 kubelet[3453]: E1031 17:13:42.357248    3453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.099188  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:43 kubernetes-upgrade-171032 kubelet[3464]: E1031 17:13:43.108008    3464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.099546  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:43 kubernetes-upgrade-171032 kubelet[3475]: E1031 17:13:43.858806    3475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.099930  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:44 kubernetes-upgrade-171032 kubelet[3489]: E1031 17:13:44.614246    3489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:13:45.100155  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:13:45.100176  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:13:45.100352  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:13:45.100383  190637 out.go:239]   Oct 31 17:13:41 kubernetes-upgrade-171032 kubelet[3442]: E1031 17:13:41.610478    3442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.100392  190637 out.go:239]   Oct 31 17:13:42 kubernetes-upgrade-171032 kubelet[3453]: E1031 17:13:42.357248    3453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.100406  190637 out.go:239]   Oct 31 17:13:43 kubernetes-upgrade-171032 kubelet[3464]: E1031 17:13:43.108008    3464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.100418  190637 out.go:239]   Oct 31 17:13:43 kubernetes-upgrade-171032 kubelet[3475]: E1031 17:13:43.858806    3475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:45.100425  190637 out.go:239]   Oct 31 17:13:44 kubernetes-upgrade-171032 kubelet[3489]: E1031 17:13:44.614246    3489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:13:45.100438  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:13:45.100446  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
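Every kubelet restart above dies on the same line: `failed to parse kubelet flag: unknown flag: --cni-conf-dir`. A small triage helper (illustrative only, not part of minikube) can pull the offending flag name straight out of such a journal line with `sed`; the sample line below is copied from the log above:

```shell
# Extract the unknown flag name from a kubelet "command failed" journal line.
# The sample line is taken verbatim from the log entries above.
line='E1031 17:13:44.614246    3489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"'
flag=$(printf '%s\n' "$line" | sed -n 's/.*unknown flag: \(--[a-z-]*\)".*/\1/p')
echo "$flag"   # -> --cni-conf-dir
```

Grepping the journal once for this pattern is usually faster than scanning hundreds of repeated restart entries by hand.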
	I1031 17:13:55.102227  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:13:55.159461  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:13:55.159542  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:13:55.187457  190637 cri.go:87] found id: ""
	I1031 17:13:55.187481  190637 logs.go:274] 0 containers: []
	W1031 17:13:55.187493  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:13:55.187500  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:13:55.187557  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:13:55.213285  190637 cri.go:87] found id: ""
	I1031 17:13:55.213309  190637 logs.go:274] 0 containers: []
	W1031 17:13:55.213326  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:13:55.213333  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:13:55.213384  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:13:55.238028  190637 cri.go:87] found id: ""
	I1031 17:13:55.238053  190637 logs.go:274] 0 containers: []
	W1031 17:13:55.238059  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:13:55.238064  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:13:55.238106  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:13:55.263386  190637 cri.go:87] found id: ""
	I1031 17:13:55.263414  190637 logs.go:274] 0 containers: []
	W1031 17:13:55.263421  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:13:55.263430  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:13:55.263479  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:13:55.289591  190637 cri.go:87] found id: ""
	I1031 17:13:55.289622  190637 logs.go:274] 0 containers: []
	W1031 17:13:55.289632  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:13:55.289640  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:13:55.289684  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:13:55.314367  190637 cri.go:87] found id: ""
	I1031 17:13:55.314408  190637 logs.go:274] 0 containers: []
	W1031 17:13:55.314415  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:13:55.314421  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:13:55.314464  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:13:55.339332  190637 cri.go:87] found id: ""
	I1031 17:13:55.339363  190637 logs.go:274] 0 containers: []
	W1031 17:13:55.339369  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:13:55.339375  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:13:55.339433  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:13:55.364449  190637 cri.go:87] found id: ""
	I1031 17:13:55.364481  190637 logs.go:274] 0 containers: []
	W1031 17:13:55.364491  190637 logs.go:276] No container was found matching "kube-controller-manager"
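The block above is minikube's per-component container scan: for each control-plane component it runs `crictl ps -a --quiet --name=<component>` and treats empty output as "no container found". A sketch of that loop is below; it only prints the commands that would be issued, since `crictl` exists on the node under test, not necessarily on the machine reading this report:

```shell
# Reconstruct the scan seen in the log: one `crictl ps` query per component.
# Only the command lines are printed here; running them requires crictl on the node.
components="kube-apiserver etcd coredns kube-scheduler kube-proxy \
kubernetes-dashboard storage-provisioner kube-controller-manager"
for c in $components; do
  printf 'sudo crictl ps -a --quiet --name=%s\n' "$c"
done
```

All eight queries returning nothing is consistent with the kubelet never coming up: no kubelet, no static pods, no control-plane containers.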
	I1031 17:13:55.364500  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:13:55.364513  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:13:55.381439  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:05 kubernetes-upgrade-171032 kubelet[2503]: E1031 17:13:05.616022    2503 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.381812  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:06 kubernetes-upgrade-171032 kubelet[2514]: E1031 17:13:06.357325    2514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.382240  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:07 kubernetes-upgrade-171032 kubelet[2524]: E1031 17:13:07.108543    2524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.382827  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:07 kubernetes-upgrade-171032 kubelet[2535]: E1031 17:13:07.859345    2535 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.383461  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:08 kubernetes-upgrade-171032 kubelet[2546]: E1031 17:13:08.610442    2546 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.384048  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:09 kubernetes-upgrade-171032 kubelet[2557]: E1031 17:13:09.357591    2557 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.384659  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:10 kubernetes-upgrade-171032 kubelet[2568]: E1031 17:13:10.118835    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.385250  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:10 kubernetes-upgrade-171032 kubelet[2579]: E1031 17:13:10.859118    2579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.385779  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:11 kubernetes-upgrade-171032 kubelet[2591]: E1031 17:13:11.608449    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.386141  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:12 kubernetes-upgrade-171032 kubelet[2602]: E1031 17:13:12.357241    2602 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.386515  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:13 kubernetes-upgrade-171032 kubelet[2613]: E1031 17:13:13.110342    2613 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.386892  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:13 kubernetes-upgrade-171032 kubelet[2762]: E1031 17:13:13.856057    2762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.387277  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:14 kubernetes-upgrade-171032 kubelet[2773]: E1031 17:13:14.610703    2773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.387649  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:15 kubernetes-upgrade-171032 kubelet[2784]: E1031 17:13:15.364002    2784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.388013  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:16 kubernetes-upgrade-171032 kubelet[2795]: E1031 17:13:16.110911    2795 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.388513  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:16 kubernetes-upgrade-171032 kubelet[2806]: E1031 17:13:16.859668    2806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.388882  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:17 kubernetes-upgrade-171032 kubelet[2817]: E1031 17:13:17.607219    2817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.389252  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:18 kubernetes-upgrade-171032 kubelet[2828]: E1031 17:13:18.357740    2828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.389622  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:19 kubernetes-upgrade-171032 kubelet[2839]: E1031 17:13:19.108336    2839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.389989  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:19 kubernetes-upgrade-171032 kubelet[2850]: E1031 17:13:19.857866    2850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.390351  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:20 kubernetes-upgrade-171032 kubelet[2860]: E1031 17:13:20.608606    2860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.390723  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:21 kubernetes-upgrade-171032 kubelet[2872]: E1031 17:13:21.357964    2872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.391113  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:22 kubernetes-upgrade-171032 kubelet[2882]: E1031 17:13:22.108680    2882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.391487  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:22 kubernetes-upgrade-171032 kubelet[2893]: E1031 17:13:22.868814    2893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.391852  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:23 kubernetes-upgrade-171032 kubelet[2906]: E1031 17:13:23.615922    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.392250  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:24 kubernetes-upgrade-171032 kubelet[3049]: E1031 17:13:24.392205    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.392619  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:25 kubernetes-upgrade-171032 kubelet[3060]: E1031 17:13:25.108044    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.392982  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:25 kubernetes-upgrade-171032 kubelet[3071]: E1031 17:13:25.859282    3071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.393355  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:26 kubernetes-upgrade-171032 kubelet[3083]: E1031 17:13:26.610036    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.393742  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:27 kubernetes-upgrade-171032 kubelet[3094]: E1031 17:13:27.357276    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.394130  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:28 kubernetes-upgrade-171032 kubelet[3105]: E1031 17:13:28.110002    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.394502  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:28 kubernetes-upgrade-171032 kubelet[3117]: E1031 17:13:28.857456    3117 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.394885  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:29 kubernetes-upgrade-171032 kubelet[3127]: E1031 17:13:29.606648    3127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.395252  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:30 kubernetes-upgrade-171032 kubelet[3138]: E1031 17:13:30.356492    3138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.395614  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3149]: E1031 17:13:31.108742    3149 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.395982  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3160]: E1031 17:13:31.857364    3160 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.396535  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:32 kubernetes-upgrade-171032 kubelet[3171]: E1031 17:13:32.608053    3171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.397063  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:33 kubernetes-upgrade-171032 kubelet[3182]: E1031 17:13:33.355855    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.397463  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:34 kubernetes-upgrade-171032 kubelet[3193]: E1031 17:13:34.110529    3193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.397844  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:34 kubernetes-upgrade-171032 kubelet[3339]: E1031 17:13:34.857484    3339 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.398224  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:35 kubernetes-upgrade-171032 kubelet[3350]: E1031 17:13:35.607438    3350 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.398585  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:36 kubernetes-upgrade-171032 kubelet[3361]: E1031 17:13:36.357949    3361 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.398959  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:37 kubernetes-upgrade-171032 kubelet[3374]: E1031 17:13:37.106348    3374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.399325  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:37 kubernetes-upgrade-171032 kubelet[3386]: E1031 17:13:37.858051    3386 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.399698  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:38 kubernetes-upgrade-171032 kubelet[3397]: E1031 17:13:38.611565    3397 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.400063  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:39 kubernetes-upgrade-171032 kubelet[3408]: E1031 17:13:39.358596    3408 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.400444  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:40 kubernetes-upgrade-171032 kubelet[3419]: E1031 17:13:40.108344    3419 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.400807  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:40 kubernetes-upgrade-171032 kubelet[3430]: E1031 17:13:40.857942    3430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.401173  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:41 kubernetes-upgrade-171032 kubelet[3442]: E1031 17:13:41.610478    3442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.401544  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:42 kubernetes-upgrade-171032 kubelet[3453]: E1031 17:13:42.357248    3453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.401915  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:43 kubernetes-upgrade-171032 kubelet[3464]: E1031 17:13:43.108008    3464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.402286  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:43 kubernetes-upgrade-171032 kubelet[3475]: E1031 17:13:43.858806    3475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.402665  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:44 kubernetes-upgrade-171032 kubelet[3489]: E1031 17:13:44.614246    3489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.403039  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:45 kubernetes-upgrade-171032 kubelet[3636]: E1031 17:13:45.363684    3636 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.403405  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:46 kubernetes-upgrade-171032 kubelet[3647]: E1031 17:13:46.108210    3647 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.403770  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:46 kubernetes-upgrade-171032 kubelet[3658]: E1031 17:13:46.856684    3658 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.404161  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:47 kubernetes-upgrade-171032 kubelet[3670]: E1031 17:13:47.608777    3670 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.404530  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:48 kubernetes-upgrade-171032 kubelet[3681]: E1031 17:13:48.359138    3681 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.404887  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:49 kubernetes-upgrade-171032 kubelet[3692]: E1031 17:13:49.108654    3692 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.405259  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:49 kubernetes-upgrade-171032 kubelet[3703]: E1031 17:13:49.861089    3703 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.405619  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:50 kubernetes-upgrade-171032 kubelet[3714]: E1031 17:13:50.607614    3714 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.405985  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:51 kubernetes-upgrade-171032 kubelet[3725]: E1031 17:13:51.359986    3725 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.406372  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3736]: E1031 17:13:52.107509    3736 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.406744  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3746]: E1031 17:13:52.857948    3746 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.407107  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:53 kubernetes-upgrade-171032 kubelet[3757]: E1031 17:13:53.607208    3757 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.407491  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:54 kubernetes-upgrade-171032 kubelet[3768]: E1031 17:13:54.359186    3768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.407885  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:55 kubernetes-upgrade-171032 kubelet[3780]: E1031 17:13:55.111370    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:13:55.408029  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:13:55.408051  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:13:55.424433  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:13:55.424465  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:13:55.483284  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
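The `connection to the server localhost:8443 was refused` failure follows directly from the kubelet crash loop: no apiserver container ever started, so nothing listens on 8443. The root cause is a version mismatch — the binaries path above shows a v1.25.3 kubelet, and the CNI flags (`--cni-conf-dir` among them) were removed from the kubelet in v1.24 along with dockershim. A quick version comparison with `sort -V` (illustrative, versions taken from this log):

```shell
# --cni-conf-dir was removed from the kubelet in v1.24 (dockershim removal);
# the kubelet being launched here is v1.25.3, per the binaries path in the log.
removed_in=v1.24.0
kubelet=v1.25.3
if [ "$(printf '%s\n%s\n' "$removed_in" "$kubelet" | sort -V | head -n1)" = "$removed_in" ]; then
  echo "$kubelet no longer accepts --cni-conf-dir"
fi
```

The fix on the minikube side is to stop passing the removed flag when generating kubelet arguments for Kubernetes >= v1.24.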
	I1031 17:13:55.483302  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:13:55.483312  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:13:55.518704  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:13:55.518744  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:13:55.548565  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:13:55.548589  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:13:55.548701  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:13:55.548718  190637 out.go:239]   Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3736]: E1031 17:13:52.107509    3736 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3736]: E1031 17:13:52.107509    3736 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.548726  190637 out.go:239]   Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3746]: E1031 17:13:52.857948    3746 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3746]: E1031 17:13:52.857948    3746 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.548738  190637 out.go:239]   Oct 31 17:13:53 kubernetes-upgrade-171032 kubelet[3757]: E1031 17:13:53.607208    3757 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:13:53 kubernetes-upgrade-171032 kubelet[3757]: E1031 17:13:53.607208    3757 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.548746  190637 out.go:239]   Oct 31 17:13:54 kubernetes-upgrade-171032 kubelet[3768]: E1031 17:13:54.359186    3768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:13:54 kubernetes-upgrade-171032 kubelet[3768]: E1031 17:13:54.359186    3768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:13:55.548753  190637 out.go:239]   Oct 31 17:13:55 kubernetes-upgrade-171032 kubelet[3780]: E1031 17:13:55.111370    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:13:55 kubernetes-upgrade-171032 kubelet[3780]: E1031 17:13:55.111370    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:13:55.548760  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:13:55.548774  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:14:05.549865  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:14:05.659181  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:14:05.659248  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:14:05.684125  190637 cri.go:87] found id: ""
	I1031 17:14:05.684154  190637 logs.go:274] 0 containers: []
	W1031 17:14:05.684164  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:14:05.684172  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:14:05.684230  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:14:05.715934  190637 cri.go:87] found id: ""
	I1031 17:14:05.715957  190637 logs.go:274] 0 containers: []
	W1031 17:14:05.715963  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:14:05.715969  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:14:05.716008  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:14:05.740943  190637 cri.go:87] found id: ""
	I1031 17:14:05.740974  190637 logs.go:274] 0 containers: []
	W1031 17:14:05.740983  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:14:05.740991  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:14:05.741044  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:14:05.767382  190637 cri.go:87] found id: ""
	I1031 17:14:05.767412  190637 logs.go:274] 0 containers: []
	W1031 17:14:05.767425  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:14:05.767433  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:14:05.767486  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:14:05.799807  190637 cri.go:87] found id: ""
	I1031 17:14:05.799838  190637 logs.go:274] 0 containers: []
	W1031 17:14:05.799847  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:14:05.799856  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:14:05.799971  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:14:05.827612  190637 cri.go:87] found id: ""
	I1031 17:14:05.827636  190637 logs.go:274] 0 containers: []
	W1031 17:14:05.827645  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:14:05.827653  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:14:05.827702  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:14:05.853083  190637 cri.go:87] found id: ""
	I1031 17:14:05.853119  190637 logs.go:274] 0 containers: []
	W1031 17:14:05.853129  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:14:05.853139  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:14:05.853197  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:14:05.882099  190637 cri.go:87] found id: ""
	I1031 17:14:05.882125  190637 logs.go:274] 0 containers: []
	W1031 17:14:05.882135  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:14:05.882146  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:14:05.882159  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:14:05.903938  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:16 kubernetes-upgrade-171032 kubelet[2795]: E1031 17:13:16.110911    2795 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.904362  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:16 kubernetes-upgrade-171032 kubelet[2806]: E1031 17:13:16.859668    2806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.904722  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:17 kubernetes-upgrade-171032 kubelet[2817]: E1031 17:13:17.607219    2817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.905088  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:18 kubernetes-upgrade-171032 kubelet[2828]: E1031 17:13:18.357740    2828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.905446  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:19 kubernetes-upgrade-171032 kubelet[2839]: E1031 17:13:19.108336    2839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.905811  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:19 kubernetes-upgrade-171032 kubelet[2850]: E1031 17:13:19.857866    2850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.906158  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:20 kubernetes-upgrade-171032 kubelet[2860]: E1031 17:13:20.608606    2860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.906520  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:21 kubernetes-upgrade-171032 kubelet[2872]: E1031 17:13:21.357964    2872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.906891  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:22 kubernetes-upgrade-171032 kubelet[2882]: E1031 17:13:22.108680    2882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.907255  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:22 kubernetes-upgrade-171032 kubelet[2893]: E1031 17:13:22.868814    2893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.907616  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:23 kubernetes-upgrade-171032 kubelet[2906]: E1031 17:13:23.615922    2906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.908004  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:24 kubernetes-upgrade-171032 kubelet[3049]: E1031 17:13:24.392205    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.908400  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:25 kubernetes-upgrade-171032 kubelet[3060]: E1031 17:13:25.108044    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.908758  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:25 kubernetes-upgrade-171032 kubelet[3071]: E1031 17:13:25.859282    3071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.909112  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:26 kubernetes-upgrade-171032 kubelet[3083]: E1031 17:13:26.610036    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.909473  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:27 kubernetes-upgrade-171032 kubelet[3094]: E1031 17:13:27.357276    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.909828  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:28 kubernetes-upgrade-171032 kubelet[3105]: E1031 17:13:28.110002    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.910213  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:28 kubernetes-upgrade-171032 kubelet[3117]: E1031 17:13:28.857456    3117 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.910573  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:29 kubernetes-upgrade-171032 kubelet[3127]: E1031 17:13:29.606648    3127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.910940  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:30 kubernetes-upgrade-171032 kubelet[3138]: E1031 17:13:30.356492    3138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.911356  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3149]: E1031 17:13:31.108742    3149 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.911713  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3160]: E1031 17:13:31.857364    3160 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.912107  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:32 kubernetes-upgrade-171032 kubelet[3171]: E1031 17:13:32.608053    3171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.912467  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:33 kubernetes-upgrade-171032 kubelet[3182]: E1031 17:13:33.355855    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.912821  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:34 kubernetes-upgrade-171032 kubelet[3193]: E1031 17:13:34.110529    3193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.913220  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:34 kubernetes-upgrade-171032 kubelet[3339]: E1031 17:13:34.857484    3339 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.913581  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:35 kubernetes-upgrade-171032 kubelet[3350]: E1031 17:13:35.607438    3350 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.913932  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:36 kubernetes-upgrade-171032 kubelet[3361]: E1031 17:13:36.357949    3361 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.914280  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:37 kubernetes-upgrade-171032 kubelet[3374]: E1031 17:13:37.106348    3374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.914638  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:37 kubernetes-upgrade-171032 kubelet[3386]: E1031 17:13:37.858051    3386 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.914994  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:38 kubernetes-upgrade-171032 kubelet[3397]: E1031 17:13:38.611565    3397 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.915348  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:39 kubernetes-upgrade-171032 kubelet[3408]: E1031 17:13:39.358596    3408 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.915708  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:40 kubernetes-upgrade-171032 kubelet[3419]: E1031 17:13:40.108344    3419 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.916122  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:40 kubernetes-upgrade-171032 kubelet[3430]: E1031 17:13:40.857942    3430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.916512  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:41 kubernetes-upgrade-171032 kubelet[3442]: E1031 17:13:41.610478    3442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.916861  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:42 kubernetes-upgrade-171032 kubelet[3453]: E1031 17:13:42.357248    3453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.917237  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:43 kubernetes-upgrade-171032 kubelet[3464]: E1031 17:13:43.108008    3464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.917593  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:43 kubernetes-upgrade-171032 kubelet[3475]: E1031 17:13:43.858806    3475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.917949  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:44 kubernetes-upgrade-171032 kubelet[3489]: E1031 17:13:44.614246    3489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.918320  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:45 kubernetes-upgrade-171032 kubelet[3636]: E1031 17:13:45.363684    3636 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.918671  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:46 kubernetes-upgrade-171032 kubelet[3647]: E1031 17:13:46.108210    3647 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.919044  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:46 kubernetes-upgrade-171032 kubelet[3658]: E1031 17:13:46.856684    3658 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.919414  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:47 kubernetes-upgrade-171032 kubelet[3670]: E1031 17:13:47.608777    3670 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.919771  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:48 kubernetes-upgrade-171032 kubelet[3681]: E1031 17:13:48.359138    3681 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.920151  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:49 kubernetes-upgrade-171032 kubelet[3692]: E1031 17:13:49.108654    3692 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.920509  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:49 kubernetes-upgrade-171032 kubelet[3703]: E1031 17:13:49.861089    3703 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.920866  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:50 kubernetes-upgrade-171032 kubelet[3714]: E1031 17:13:50.607614    3714 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.921218  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:51 kubernetes-upgrade-171032 kubelet[3725]: E1031 17:13:51.359986    3725 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.921578  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3736]: E1031 17:13:52.107509    3736 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.921936  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3746]: E1031 17:13:52.857948    3746 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.922297  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:53 kubernetes-upgrade-171032 kubelet[3757]: E1031 17:13:53.607208    3757 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.922655  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:54 kubernetes-upgrade-171032 kubelet[3768]: E1031 17:13:54.359186    3768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.923015  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:55 kubernetes-upgrade-171032 kubelet[3780]: E1031 17:13:55.111370    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.923372  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:55 kubernetes-upgrade-171032 kubelet[3932]: E1031 17:13:55.858734    3932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.923721  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:56 kubernetes-upgrade-171032 kubelet[3943]: E1031 17:13:56.610796    3943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.924100  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:57 kubernetes-upgrade-171032 kubelet[3954]: E1031 17:13:57.356768    3954 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.924458  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:58 kubernetes-upgrade-171032 kubelet[3965]: E1031 17:13:58.106778    3965 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.924862  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:58 kubernetes-upgrade-171032 kubelet[3976]: E1031 17:13:58.860827    3976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.925219  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:59 kubernetes-upgrade-171032 kubelet[3987]: E1031 17:13:59.607326    3987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.925576  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:00 kubernetes-upgrade-171032 kubelet[3998]: E1031 17:14:00.359061    3998 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.925940  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:01 kubernetes-upgrade-171032 kubelet[4010]: E1031 17:14:01.110891    4010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.926288  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:01 kubernetes-upgrade-171032 kubelet[4020]: E1031 17:14:01.860038    4020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.926645  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:02 kubernetes-upgrade-171032 kubelet[4033]: E1031 17:14:02.616441    4033 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.927006  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:03 kubernetes-upgrade-171032 kubelet[4043]: E1031 17:14:03.366329    4043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.927366  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:04 kubernetes-upgrade-171032 kubelet[4055]: E1031 17:14:04.114443    4055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.927726  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:04 kubernetes-upgrade-171032 kubelet[4067]: E1031 17:14:04.861115    4067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:05.928128  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:05 kubernetes-upgrade-171032 kubelet[4081]: E1031 17:14:05.612774    4081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:14:05.928261  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:14:05.928277  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:14:05.944434  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:14:05.944464  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:14:06.016460  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:14:06.016486  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:14:06.016495  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:14:06.053679  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:14:06.053721  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:14:06.080886  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:14:06.080910  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:14:06.081022  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:14:06.081043  190637 out.go:239]   Oct 31 17:14:02 kubernetes-upgrade-171032 kubelet[4033]: E1031 17:14:02.616441    4033 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:06.081051  190637 out.go:239]   Oct 31 17:14:03 kubernetes-upgrade-171032 kubelet[4043]: E1031 17:14:03.366329    4043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:06.081059  190637 out.go:239]   Oct 31 17:14:04 kubernetes-upgrade-171032 kubelet[4055]: E1031 17:14:04.114443    4055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:06.081066  190637 out.go:239]   Oct 31 17:14:04 kubernetes-upgrade-171032 kubelet[4067]: E1031 17:14:04.861115    4067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:06.081079  190637 out.go:239]   Oct 31 17:14:05 kubernetes-upgrade-171032 kubelet[4081]: E1031 17:14:05.612774    4081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:14:06.081090  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:14:06.081103  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:14:16.082747  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:14:16.159382  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:14:16.159488  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:14:16.188096  190637 cri.go:87] found id: ""
	I1031 17:14:16.188143  190637 logs.go:274] 0 containers: []
	W1031 17:14:16.188151  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:14:16.188159  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:14:16.188212  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:14:16.214554  190637 cri.go:87] found id: ""
	I1031 17:14:16.214582  190637 logs.go:274] 0 containers: []
	W1031 17:14:16.214590  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:14:16.214598  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:14:16.214643  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:14:16.240340  190637 cri.go:87] found id: ""
	I1031 17:14:16.240371  190637 logs.go:274] 0 containers: []
	W1031 17:14:16.240378  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:14:16.240384  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:14:16.240459  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:14:16.269553  190637 cri.go:87] found id: ""
	I1031 17:14:16.269583  190637 logs.go:274] 0 containers: []
	W1031 17:14:16.269591  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:14:16.269603  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:14:16.269658  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:14:16.296966  190637 cri.go:87] found id: ""
	I1031 17:14:16.297001  190637 logs.go:274] 0 containers: []
	W1031 17:14:16.297010  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:14:16.297018  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:14:16.297074  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:14:16.324374  190637 cri.go:87] found id: ""
	I1031 17:14:16.324400  190637 logs.go:274] 0 containers: []
	W1031 17:14:16.324409  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:14:16.324417  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:14:16.324471  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:14:16.351439  190637 cri.go:87] found id: ""
	I1031 17:14:16.351475  190637 logs.go:274] 0 containers: []
	W1031 17:14:16.351485  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:14:16.351493  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:14:16.351555  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:14:16.380443  190637 cri.go:87] found id: ""
	I1031 17:14:16.380471  190637 logs.go:274] 0 containers: []
	W1031 17:14:16.380479  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:14:16.380491  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:14:16.380509  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:14:16.400760  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:26 kubernetes-upgrade-171032 kubelet[3083]: E1031 17:13:26.610036    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.401416  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:27 kubernetes-upgrade-171032 kubelet[3094]: E1031 17:13:27.357276    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.402036  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:28 kubernetes-upgrade-171032 kubelet[3105]: E1031 17:13:28.110002    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.402690  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:28 kubernetes-upgrade-171032 kubelet[3117]: E1031 17:13:28.857456    3117 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.403424  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:29 kubernetes-upgrade-171032 kubelet[3127]: E1031 17:13:29.606648    3127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.404205  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:30 kubernetes-upgrade-171032 kubelet[3138]: E1031 17:13:30.356492    3138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.404885  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3149]: E1031 17:13:31.108742    3149 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.405535  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:31 kubernetes-upgrade-171032 kubelet[3160]: E1031 17:13:31.857364    3160 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.406180  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:32 kubernetes-upgrade-171032 kubelet[3171]: E1031 17:13:32.608053    3171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.406886  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:33 kubernetes-upgrade-171032 kubelet[3182]: E1031 17:13:33.355855    3182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.407634  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:34 kubernetes-upgrade-171032 kubelet[3193]: E1031 17:13:34.110529    3193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.408278  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:34 kubernetes-upgrade-171032 kubelet[3339]: E1031 17:13:34.857484    3339 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.408926  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:35 kubernetes-upgrade-171032 kubelet[3350]: E1031 17:13:35.607438    3350 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.409529  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:36 kubernetes-upgrade-171032 kubelet[3361]: E1031 17:13:36.357949    3361 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.410147  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:37 kubernetes-upgrade-171032 kubelet[3374]: E1031 17:13:37.106348    3374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.410768  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:37 kubernetes-upgrade-171032 kubelet[3386]: E1031 17:13:37.858051    3386 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.411390  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:38 kubernetes-upgrade-171032 kubelet[3397]: E1031 17:13:38.611565    3397 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.411998  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:39 kubernetes-upgrade-171032 kubelet[3408]: E1031 17:13:39.358596    3408 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.412726  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:40 kubernetes-upgrade-171032 kubelet[3419]: E1031 17:13:40.108344    3419 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.413197  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:40 kubernetes-upgrade-171032 kubelet[3430]: E1031 17:13:40.857942    3430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.413841  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:41 kubernetes-upgrade-171032 kubelet[3442]: E1031 17:13:41.610478    3442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.414382  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:42 kubernetes-upgrade-171032 kubelet[3453]: E1031 17:13:42.357248    3453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.414851  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:43 kubernetes-upgrade-171032 kubelet[3464]: E1031 17:13:43.108008    3464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.415284  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:43 kubernetes-upgrade-171032 kubelet[3475]: E1031 17:13:43.858806    3475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.415752  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:44 kubernetes-upgrade-171032 kubelet[3489]: E1031 17:13:44.614246    3489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.416175  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:45 kubernetes-upgrade-171032 kubelet[3636]: E1031 17:13:45.363684    3636 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.416574  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:46 kubernetes-upgrade-171032 kubelet[3647]: E1031 17:13:46.108210    3647 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.417001  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:46 kubernetes-upgrade-171032 kubelet[3658]: E1031 17:13:46.856684    3658 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.417452  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:47 kubernetes-upgrade-171032 kubelet[3670]: E1031 17:13:47.608777    3670 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.417912  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:48 kubernetes-upgrade-171032 kubelet[3681]: E1031 17:13:48.359138    3681 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.418353  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:49 kubernetes-upgrade-171032 kubelet[3692]: E1031 17:13:49.108654    3692 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.418882  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:49 kubernetes-upgrade-171032 kubelet[3703]: E1031 17:13:49.861089    3703 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.419506  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:50 kubernetes-upgrade-171032 kubelet[3714]: E1031 17:13:50.607614    3714 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.420158  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:51 kubernetes-upgrade-171032 kubelet[3725]: E1031 17:13:51.359986    3725 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.420777  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3736]: E1031 17:13:52.107509    3736 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.421420  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3746]: E1031 17:13:52.857948    3746 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.422017  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:53 kubernetes-upgrade-171032 kubelet[3757]: E1031 17:13:53.607208    3757 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.422604  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:54 kubernetes-upgrade-171032 kubelet[3768]: E1031 17:13:54.359186    3768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.423285  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:55 kubernetes-upgrade-171032 kubelet[3780]: E1031 17:13:55.111370    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.423796  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:55 kubernetes-upgrade-171032 kubelet[3932]: E1031 17:13:55.858734    3932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.424221  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:56 kubernetes-upgrade-171032 kubelet[3943]: E1031 17:13:56.610796    3943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.424594  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:57 kubernetes-upgrade-171032 kubelet[3954]: E1031 17:13:57.356768    3954 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.425109  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:58 kubernetes-upgrade-171032 kubelet[3965]: E1031 17:13:58.106778    3965 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.425754  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:58 kubernetes-upgrade-171032 kubelet[3976]: E1031 17:13:58.860827    3976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.426371  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:59 kubernetes-upgrade-171032 kubelet[3987]: E1031 17:13:59.607326    3987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.426918  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:00 kubernetes-upgrade-171032 kubelet[3998]: E1031 17:14:00.359061    3998 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.427515  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:01 kubernetes-upgrade-171032 kubelet[4010]: E1031 17:14:01.110891    4010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.428059  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:01 kubernetes-upgrade-171032 kubelet[4020]: E1031 17:14:01.860038    4020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.428675  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:02 kubernetes-upgrade-171032 kubelet[4033]: E1031 17:14:02.616441    4033 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.429248  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:03 kubernetes-upgrade-171032 kubelet[4043]: E1031 17:14:03.366329    4043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.430019  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:04 kubernetes-upgrade-171032 kubelet[4055]: E1031 17:14:04.114443    4055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.430557  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:04 kubernetes-upgrade-171032 kubelet[4067]: E1031 17:14:04.861115    4067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.431192  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:05 kubernetes-upgrade-171032 kubelet[4081]: E1031 17:14:05.612774    4081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.431820  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:06 kubernetes-upgrade-171032 kubelet[4225]: E1031 17:14:06.357211    4225 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.432470  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:07 kubernetes-upgrade-171032 kubelet[4237]: E1031 17:14:07.117446    4237 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.433069  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:07 kubernetes-upgrade-171032 kubelet[4248]: E1031 17:14:07.864146    4248 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.433681  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:08 kubernetes-upgrade-171032 kubelet[4259]: E1031 17:14:08.610172    4259 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.434274  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:09 kubernetes-upgrade-171032 kubelet[4270]: E1031 17:14:09.366396    4270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.434858  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:10 kubernetes-upgrade-171032 kubelet[4281]: E1031 17:14:10.109468    4281 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.435461  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:10 kubernetes-upgrade-171032 kubelet[4292]: E1031 17:14:10.868702    4292 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.436058  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:11 kubernetes-upgrade-171032 kubelet[4303]: E1031 17:14:11.622760    4303 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.436656  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:12 kubernetes-upgrade-171032 kubelet[4313]: E1031 17:14:12.363613    4313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.437263  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4324]: E1031 17:14:13.121441    4324 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.437668  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4336]: E1031 17:14:13.857630    4336 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.438181  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:14 kubernetes-upgrade-171032 kubelet[4347]: E1031 17:14:14.609492    4347 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.438836  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:15 kubernetes-upgrade-171032 kubelet[4358]: E1031 17:14:15.362601    4358 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.439477  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:16 kubernetes-upgrade-171032 kubelet[4369]: E1031 17:14:16.119299    4369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:14:16.439718  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:14:16.439738  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:14:16.471999  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:14:16.472047  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:14:16.541707  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:14:16.541730  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:14:16.541739  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:14:16.578577  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:14:16.578612  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:14:16.610114  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:14:16.610139  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:14:16.610255  190637 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1031 17:14:16.610279  190637 out.go:239]   Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4324]: E1031 17:14:13.121441    4324 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4324]: E1031 17:14:13.121441    4324 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.610294  190637 out.go:239]   Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4336]: E1031 17:14:13.857630    4336 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4336]: E1031 17:14:13.857630    4336 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.610306  190637 out.go:239]   Oct 31 17:14:14 kubernetes-upgrade-171032 kubelet[4347]: E1031 17:14:14.609492    4347 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:14:14 kubernetes-upgrade-171032 kubelet[4347]: E1031 17:14:14.609492    4347 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.610314  190637 out.go:239]   Oct 31 17:14:15 kubernetes-upgrade-171032 kubelet[4358]: E1031 17:14:15.362601    4358 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:14:15 kubernetes-upgrade-171032 kubelet[4358]: E1031 17:14:15.362601    4358 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:16.610326  190637 out.go:239]   Oct 31 17:14:16 kubernetes-upgrade-171032 kubelet[4369]: E1031 17:14:16.119299    4369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:14:16 kubernetes-upgrade-171032 kubelet[4369]: E1031 17:14:16.119299    4369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:14:16.610336  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:14:16.610344  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:14:26.611965  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:14:26.658860  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:14:26.658948  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:14:26.689039  190637 cri.go:87] found id: ""
	I1031 17:14:26.689072  190637 logs.go:274] 0 containers: []
	W1031 17:14:26.689082  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:14:26.689090  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:14:26.689133  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:14:26.721335  190637 cri.go:87] found id: ""
	I1031 17:14:26.721364  190637 logs.go:274] 0 containers: []
	W1031 17:14:26.721372  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:14:26.721379  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:14:26.721452  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:14:26.758740  190637 cri.go:87] found id: ""
	I1031 17:14:26.758769  190637 logs.go:274] 0 containers: []
	W1031 17:14:26.758777  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:14:26.758789  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:14:26.758841  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:14:26.789607  190637 cri.go:87] found id: ""
	I1031 17:14:26.789633  190637 logs.go:274] 0 containers: []
	W1031 17:14:26.789639  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:14:26.789646  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:14:26.789695  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:14:26.818533  190637 cri.go:87] found id: ""
	I1031 17:14:26.818572  190637 logs.go:274] 0 containers: []
	W1031 17:14:26.818581  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:14:26.818588  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:14:26.818639  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:14:26.842572  190637 cri.go:87] found id: ""
	I1031 17:14:26.842603  190637 logs.go:274] 0 containers: []
	W1031 17:14:26.842612  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:14:26.842620  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:14:26.842662  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:14:26.867153  190637 cri.go:87] found id: ""
	I1031 17:14:26.867184  190637 logs.go:274] 0 containers: []
	W1031 17:14:26.867194  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:14:26.867202  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:14:26.867252  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:14:26.893547  190637 cri.go:87] found id: ""
	I1031 17:14:26.893573  190637 logs.go:274] 0 containers: []
	W1031 17:14:26.893581  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:14:26.893592  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:14:26.893608  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:14:26.932992  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:14:26.933031  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:14:26.962385  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:14:26.962421  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:14:26.980611  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:37 kubernetes-upgrade-171032 kubelet[3374]: E1031 17:13:37.106348    3374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.980966  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:37 kubernetes-upgrade-171032 kubelet[3386]: E1031 17:13:37.858051    3386 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.981305  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:38 kubernetes-upgrade-171032 kubelet[3397]: E1031 17:13:38.611565    3397 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.981656  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:39 kubernetes-upgrade-171032 kubelet[3408]: E1031 17:13:39.358596    3408 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.982130  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:40 kubernetes-upgrade-171032 kubelet[3419]: E1031 17:13:40.108344    3419 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.982735  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:40 kubernetes-upgrade-171032 kubelet[3430]: E1031 17:13:40.857942    3430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.983150  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:41 kubernetes-upgrade-171032 kubelet[3442]: E1031 17:13:41.610478    3442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.983511  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:42 kubernetes-upgrade-171032 kubelet[3453]: E1031 17:13:42.357248    3453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.983864  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:43 kubernetes-upgrade-171032 kubelet[3464]: E1031 17:13:43.108008    3464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.984301  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:43 kubernetes-upgrade-171032 kubelet[3475]: E1031 17:13:43.858806    3475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.984686  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:44 kubernetes-upgrade-171032 kubelet[3489]: E1031 17:13:44.614246    3489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.985042  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:45 kubernetes-upgrade-171032 kubelet[3636]: E1031 17:13:45.363684    3636 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.985399  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:46 kubernetes-upgrade-171032 kubelet[3647]: E1031 17:13:46.108210    3647 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.985746  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:46 kubernetes-upgrade-171032 kubelet[3658]: E1031 17:13:46.856684    3658 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.986086  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:47 kubernetes-upgrade-171032 kubelet[3670]: E1031 17:13:47.608777    3670 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.986440  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:48 kubernetes-upgrade-171032 kubelet[3681]: E1031 17:13:48.359138    3681 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.986784  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:49 kubernetes-upgrade-171032 kubelet[3692]: E1031 17:13:49.108654    3692 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.987127  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:49 kubernetes-upgrade-171032 kubelet[3703]: E1031 17:13:49.861089    3703 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.987506  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:50 kubernetes-upgrade-171032 kubelet[3714]: E1031 17:13:50.607614    3714 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.987923  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:51 kubernetes-upgrade-171032 kubelet[3725]: E1031 17:13:51.359986    3725 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.988392  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3736]: E1031 17:13:52.107509    3736 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.988956  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3746]: E1031 17:13:52.857948    3746 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.989570  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:53 kubernetes-upgrade-171032 kubelet[3757]: E1031 17:13:53.607208    3757 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.990074  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:54 kubernetes-upgrade-171032 kubelet[3768]: E1031 17:13:54.359186    3768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.990453  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:55 kubernetes-upgrade-171032 kubelet[3780]: E1031 17:13:55.111370    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.990811  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:55 kubernetes-upgrade-171032 kubelet[3932]: E1031 17:13:55.858734    3932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.991152  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:56 kubernetes-upgrade-171032 kubelet[3943]: E1031 17:13:56.610796    3943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.991504  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:57 kubernetes-upgrade-171032 kubelet[3954]: E1031 17:13:57.356768    3954 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.991933  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:58 kubernetes-upgrade-171032 kubelet[3965]: E1031 17:13:58.106778    3965 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.992491  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:58 kubernetes-upgrade-171032 kubelet[3976]: E1031 17:13:58.860827    3976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.992839  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:59 kubernetes-upgrade-171032 kubelet[3987]: E1031 17:13:59.607326    3987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.993199  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:00 kubernetes-upgrade-171032 kubelet[3998]: E1031 17:14:00.359061    3998 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.993559  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:01 kubernetes-upgrade-171032 kubelet[4010]: E1031 17:14:01.110891    4010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.993906  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:01 kubernetes-upgrade-171032 kubelet[4020]: E1031 17:14:01.860038    4020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.994279  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:02 kubernetes-upgrade-171032 kubelet[4033]: E1031 17:14:02.616441    4033 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.994679  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:03 kubernetes-upgrade-171032 kubelet[4043]: E1031 17:14:03.366329    4043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.995027  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:04 kubernetes-upgrade-171032 kubelet[4055]: E1031 17:14:04.114443    4055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.995406  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:04 kubernetes-upgrade-171032 kubelet[4067]: E1031 17:14:04.861115    4067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.995958  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:05 kubernetes-upgrade-171032 kubelet[4081]: E1031 17:14:05.612774    4081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.996423  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:06 kubernetes-upgrade-171032 kubelet[4225]: E1031 17:14:06.357211    4225 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.996821  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:07 kubernetes-upgrade-171032 kubelet[4237]: E1031 17:14:07.117446    4237 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.997282  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:07 kubernetes-upgrade-171032 kubelet[4248]: E1031 17:14:07.864146    4248 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.997652  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:08 kubernetes-upgrade-171032 kubelet[4259]: E1031 17:14:08.610172    4259 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.998043  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:09 kubernetes-upgrade-171032 kubelet[4270]: E1031 17:14:09.366396    4270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.998594  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:10 kubernetes-upgrade-171032 kubelet[4281]: E1031 17:14:10.109468    4281 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.999205  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:10 kubernetes-upgrade-171032 kubelet[4292]: E1031 17:14:10.868702    4292 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:26.999781  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:11 kubernetes-upgrade-171032 kubelet[4303]: E1031 17:14:11.622760    4303 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.000305  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:12 kubernetes-upgrade-171032 kubelet[4313]: E1031 17:14:12.363613    4313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.000673  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4324]: E1031 17:14:13.121441    4324 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.001016  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4336]: E1031 17:14:13.857630    4336 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.001366  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:14 kubernetes-upgrade-171032 kubelet[4347]: E1031 17:14:14.609492    4347 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.001723  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:15 kubernetes-upgrade-171032 kubelet[4358]: E1031 17:14:15.362601    4358 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.002073  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:16 kubernetes-upgrade-171032 kubelet[4369]: E1031 17:14:16.119299    4369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.002462  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:16 kubernetes-upgrade-171032 kubelet[4515]: E1031 17:14:16.864545    4515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.002810  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:17 kubernetes-upgrade-171032 kubelet[4526]: E1031 17:14:17.624834    4526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.003187  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:18 kubernetes-upgrade-171032 kubelet[4537]: E1031 17:14:18.359221    4537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.003538  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:19 kubernetes-upgrade-171032 kubelet[4548]: E1031 17:14:19.116675    4548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.003928  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:19 kubernetes-upgrade-171032 kubelet[4559]: E1031 17:14:19.870756    4559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.004301  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:20 kubernetes-upgrade-171032 kubelet[4570]: E1031 17:14:20.618731    4570 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.004753  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:21 kubernetes-upgrade-171032 kubelet[4582]: E1031 17:14:21.361163    4582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.005241  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:22 kubernetes-upgrade-171032 kubelet[4593]: E1031 17:14:22.113798    4593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.005680  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:22 kubernetes-upgrade-171032 kubelet[4604]: E1031 17:14:22.870832    4604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.006094  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:23 kubernetes-upgrade-171032 kubelet[4616]: E1031 17:14:23.609420    4616 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.006473  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:24 kubernetes-upgrade-171032 kubelet[4626]: E1031 17:14:24.360595    4626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.006844  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4637]: E1031 17:14:25.112298    4637 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.007232  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4649]: E1031 17:14:25.856672    4649 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.007681  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:26 kubernetes-upgrade-171032 kubelet[4660]: E1031 17:14:26.613677    4660 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:14:27.007869  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:14:27.007887  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:14:27.025495  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:14:27.025533  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:14:27.106421  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:14:27.106453  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:14:27.106465  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:14:27.106619  190637 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1031 17:14:27.106636  190637 out.go:239]   Oct 31 17:14:23 kubernetes-upgrade-171032 kubelet[4616]: E1031 17:14:23.609420    4616 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:14:23 kubernetes-upgrade-171032 kubelet[4616]: E1031 17:14:23.609420    4616 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.106648  190637 out.go:239]   Oct 31 17:14:24 kubernetes-upgrade-171032 kubelet[4626]: E1031 17:14:24.360595    4626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:14:24 kubernetes-upgrade-171032 kubelet[4626]: E1031 17:14:24.360595    4626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.106665  190637 out.go:239]   Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4637]: E1031 17:14:25.112298    4637 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4637]: E1031 17:14:25.112298    4637 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.106671  190637 out.go:239]   Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4649]: E1031 17:14:25.856672    4649 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4649]: E1031 17:14:25.856672    4649 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:27.106683  190637 out.go:239]   Oct 31 17:14:26 kubernetes-upgrade-171032 kubelet[4660]: E1031 17:14:26.613677    4660 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:14:26 kubernetes-upgrade-171032 kubelet[4660]: E1031 17:14:26.613677    4660 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:14:27.106688  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:14:27.106696  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:14:37.107827  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:14:37.159086  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:14:37.159150  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:14:37.183955  190637 cri.go:87] found id: ""
	I1031 17:14:37.183987  190637 logs.go:274] 0 containers: []
	W1031 17:14:37.183996  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:14:37.184002  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:14:37.184046  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:14:37.208492  190637 cri.go:87] found id: ""
	I1031 17:14:37.208519  190637 logs.go:274] 0 containers: []
	W1031 17:14:37.208527  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:14:37.208535  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:14:37.208588  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:14:37.232431  190637 cri.go:87] found id: ""
	I1031 17:14:37.232460  190637 logs.go:274] 0 containers: []
	W1031 17:14:37.232467  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:14:37.232472  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:14:37.232513  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:14:37.255922  190637 cri.go:87] found id: ""
	I1031 17:14:37.255954  190637 logs.go:274] 0 containers: []
	W1031 17:14:37.255961  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:14:37.255967  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:14:37.256010  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:14:37.280358  190637 cri.go:87] found id: ""
	I1031 17:14:37.280387  190637 logs.go:274] 0 containers: []
	W1031 17:14:37.280396  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:14:37.280404  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:14:37.280457  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:14:37.304213  190637 cri.go:87] found id: ""
	I1031 17:14:37.304237  190637 logs.go:274] 0 containers: []
	W1031 17:14:37.304245  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:14:37.304252  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:14:37.304304  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:14:37.327820  190637 cri.go:87] found id: ""
	I1031 17:14:37.327846  190637 logs.go:274] 0 containers: []
	W1031 17:14:37.327857  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:14:37.327862  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:14:37.327918  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:14:37.351838  190637 cri.go:87] found id: ""
	I1031 17:14:37.351862  190637 logs.go:274] 0 containers: []
	W1031 17:14:37.351868  190637 logs.go:276] No container was found matching "kube-controller-manager"
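	The query cycle above runs `sudo crictl ps -a --quiet --name=<component>` for each control-plane component and logs "No container was found matching" whenever the result is empty. A minimal sketch of that check (the `report_missing` helper is hypothetical, written here so the empty-output case can be exercised without a live CRI socket; on a real node its first argument would be the output of the `crictl` command shown in the log):

```shell
# Hypothetical helper mirroring the check minikube performs above:
# $1 = crictl ps output (container IDs, one per line), $2 = component name.
# An empty result means the component's container never started.
report_missing() {
  if [ -z "$1" ]; then
    echo "No container was found matching \"$2\""
  fi
}

# On this node every component query returned nothing, e.g.:
report_missing "" "kube-apiserver"
```

All eight components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kubernetes-dashboard, storage-provisioner, kube-controller-manager) hit the empty branch here, which is consistent with the kubelet never starting.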
	I1031 17:14:37.351878  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:14:37.351889  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:14:37.369021  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:14:37.369061  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:14:37.424804  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:14:37.424824  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:14:37.424833  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:14:37.460892  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:14:37.460923  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:14:37.488300  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:14:37.488329  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:14:37.506626  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:47 kubernetes-upgrade-171032 kubelet[3670]: E1031 17:13:47.608777    3670 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.507220  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:48 kubernetes-upgrade-171032 kubelet[3681]: E1031 17:13:48.359138    3681 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.507794  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:49 kubernetes-upgrade-171032 kubelet[3692]: E1031 17:13:49.108654    3692 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.508206  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:49 kubernetes-upgrade-171032 kubelet[3703]: E1031 17:13:49.861089    3703 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.508558  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:50 kubernetes-upgrade-171032 kubelet[3714]: E1031 17:13:50.607614    3714 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.508932  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:51 kubernetes-upgrade-171032 kubelet[3725]: E1031 17:13:51.359986    3725 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.509282  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3736]: E1031 17:13:52.107509    3736 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.509634  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:52 kubernetes-upgrade-171032 kubelet[3746]: E1031 17:13:52.857948    3746 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.510016  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:53 kubernetes-upgrade-171032 kubelet[3757]: E1031 17:13:53.607208    3757 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.510364  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:54 kubernetes-upgrade-171032 kubelet[3768]: E1031 17:13:54.359186    3768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.510711  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:55 kubernetes-upgrade-171032 kubelet[3780]: E1031 17:13:55.111370    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.511064  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:55 kubernetes-upgrade-171032 kubelet[3932]: E1031 17:13:55.858734    3932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.511406  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:56 kubernetes-upgrade-171032 kubelet[3943]: E1031 17:13:56.610796    3943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.511786  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:57 kubernetes-upgrade-171032 kubelet[3954]: E1031 17:13:57.356768    3954 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.512194  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:58 kubernetes-upgrade-171032 kubelet[3965]: E1031 17:13:58.106778    3965 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.512548  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:58 kubernetes-upgrade-171032 kubelet[3976]: E1031 17:13:58.860827    3976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.512894  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:59 kubernetes-upgrade-171032 kubelet[3987]: E1031 17:13:59.607326    3987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.513243  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:00 kubernetes-upgrade-171032 kubelet[3998]: E1031 17:14:00.359061    3998 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.513588  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:01 kubernetes-upgrade-171032 kubelet[4010]: E1031 17:14:01.110891    4010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.513945  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:01 kubernetes-upgrade-171032 kubelet[4020]: E1031 17:14:01.860038    4020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.514284  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:02 kubernetes-upgrade-171032 kubelet[4033]: E1031 17:14:02.616441    4033 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.514636  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:03 kubernetes-upgrade-171032 kubelet[4043]: E1031 17:14:03.366329    4043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.515023  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:04 kubernetes-upgrade-171032 kubelet[4055]: E1031 17:14:04.114443    4055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.515371  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:04 kubernetes-upgrade-171032 kubelet[4067]: E1031 17:14:04.861115    4067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.515722  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:05 kubernetes-upgrade-171032 kubelet[4081]: E1031 17:14:05.612774    4081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.516084  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:06 kubernetes-upgrade-171032 kubelet[4225]: E1031 17:14:06.357211    4225 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.516439  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:07 kubernetes-upgrade-171032 kubelet[4237]: E1031 17:14:07.117446    4237 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.516799  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:07 kubernetes-upgrade-171032 kubelet[4248]: E1031 17:14:07.864146    4248 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.517156  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:08 kubernetes-upgrade-171032 kubelet[4259]: E1031 17:14:08.610172    4259 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.517508  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:09 kubernetes-upgrade-171032 kubelet[4270]: E1031 17:14:09.366396    4270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.517860  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:10 kubernetes-upgrade-171032 kubelet[4281]: E1031 17:14:10.109468    4281 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.518209  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:10 kubernetes-upgrade-171032 kubelet[4292]: E1031 17:14:10.868702    4292 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.518577  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:11 kubernetes-upgrade-171032 kubelet[4303]: E1031 17:14:11.622760    4303 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.518933  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:12 kubernetes-upgrade-171032 kubelet[4313]: E1031 17:14:12.363613    4313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.519274  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4324]: E1031 17:14:13.121441    4324 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.519633  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4336]: E1031 17:14:13.857630    4336 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.519978  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:14 kubernetes-upgrade-171032 kubelet[4347]: E1031 17:14:14.609492    4347 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.520350  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:15 kubernetes-upgrade-171032 kubelet[4358]: E1031 17:14:15.362601    4358 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.520705  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:16 kubernetes-upgrade-171032 kubelet[4369]: E1031 17:14:16.119299    4369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.521059  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:16 kubernetes-upgrade-171032 kubelet[4515]: E1031 17:14:16.864545    4515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.521411  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:17 kubernetes-upgrade-171032 kubelet[4526]: E1031 17:14:17.624834    4526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.521766  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:18 kubernetes-upgrade-171032 kubelet[4537]: E1031 17:14:18.359221    4537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.522124  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:19 kubernetes-upgrade-171032 kubelet[4548]: E1031 17:14:19.116675    4548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.522476  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:19 kubernetes-upgrade-171032 kubelet[4559]: E1031 17:14:19.870756    4559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.522826  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:20 kubernetes-upgrade-171032 kubelet[4570]: E1031 17:14:20.618731    4570 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.523186  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:21 kubernetes-upgrade-171032 kubelet[4582]: E1031 17:14:21.361163    4582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.523527  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:22 kubernetes-upgrade-171032 kubelet[4593]: E1031 17:14:22.113798    4593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.523876  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:22 kubernetes-upgrade-171032 kubelet[4604]: E1031 17:14:22.870832    4604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.524235  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:23 kubernetes-upgrade-171032 kubelet[4616]: E1031 17:14:23.609420    4616 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.524589  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:24 kubernetes-upgrade-171032 kubelet[4626]: E1031 17:14:24.360595    4626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.524936  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4637]: E1031 17:14:25.112298    4637 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.525279  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4649]: E1031 17:14:25.856672    4649 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.525629  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:26 kubernetes-upgrade-171032 kubelet[4660]: E1031 17:14:26.613677    4660 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.525997  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:27 kubernetes-upgrade-171032 kubelet[4807]: E1031 17:14:27.362807    4807 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.526335  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:28 kubernetes-upgrade-171032 kubelet[4817]: E1031 17:14:28.108317    4817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.526676  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:28 kubernetes-upgrade-171032 kubelet[4828]: E1031 17:14:28.863297    4828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.527022  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:29 kubernetes-upgrade-171032 kubelet[4839]: E1031 17:14:29.619971    4839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.527375  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:30 kubernetes-upgrade-171032 kubelet[4849]: E1031 17:14:30.356851    4849 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.527716  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:31 kubernetes-upgrade-171032 kubelet[4860]: E1031 17:14:31.105768    4860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.528075  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:31 kubernetes-upgrade-171032 kubelet[4871]: E1031 17:14:31.858039    4871 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.528507  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:32 kubernetes-upgrade-171032 kubelet[4882]: E1031 17:14:32.605767    4882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.529064  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:33 kubernetes-upgrade-171032 kubelet[4893]: E1031 17:14:33.356209    4893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.529556  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:34 kubernetes-upgrade-171032 kubelet[4904]: E1031 17:14:34.107237    4904 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.529941  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:34 kubernetes-upgrade-171032 kubelet[4914]: E1031 17:14:34.859241    4914 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.530354  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:35 kubernetes-upgrade-171032 kubelet[4926]: E1031 17:14:35.607922    4926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.530701  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:36 kubernetes-upgrade-171032 kubelet[4937]: E1031 17:14:36.357349    4937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.531104  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:37 kubernetes-upgrade-171032 kubelet[4948]: E1031 17:14:37.105409    4948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:14:37.531227  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:14:37.531247  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:14:37.531371  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:14:37.531390  190637 out.go:239]   Oct 31 17:14:34 kubernetes-upgrade-171032 kubelet[4904]: E1031 17:14:34.107237    4904 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.531397  190637 out.go:239]   Oct 31 17:14:34 kubernetes-upgrade-171032 kubelet[4914]: E1031 17:14:34.859241    4914 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.531404  190637 out.go:239]   Oct 31 17:14:35 kubernetes-upgrade-171032 kubelet[4926]: E1031 17:14:35.607922    4926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.531411  190637 out.go:239]   Oct 31 17:14:36 kubernetes-upgrade-171032 kubelet[4937]: E1031 17:14:36.357349    4937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:37.531424  190637 out.go:239]   Oct 31 17:14:37 kubernetes-upgrade-171032 kubelet[4948]: E1031 17:14:37.105409    4948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:14:37.531429  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:14:37.531437  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:14:47.532257  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:14:47.658826  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:14:47.658898  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:14:47.683780  190637 cri.go:87] found id: ""
	I1031 17:14:47.683816  190637 logs.go:274] 0 containers: []
	W1031 17:14:47.683825  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:14:47.683834  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:14:47.683892  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:14:47.709461  190637 cri.go:87] found id: ""
	I1031 17:14:47.709488  190637 logs.go:274] 0 containers: []
	W1031 17:14:47.709497  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:14:47.709504  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:14:47.709581  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:14:47.740287  190637 cri.go:87] found id: ""
	I1031 17:14:47.740316  190637 logs.go:274] 0 containers: []
	W1031 17:14:47.740325  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:14:47.740332  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:14:47.740388  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:14:47.766414  190637 cri.go:87] found id: ""
	I1031 17:14:47.766447  190637 logs.go:274] 0 containers: []
	W1031 17:14:47.766455  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:14:47.766461  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:14:47.766511  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:14:47.791638  190637 cri.go:87] found id: ""
	I1031 17:14:47.791669  190637 logs.go:274] 0 containers: []
	W1031 17:14:47.791679  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:14:47.791687  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:14:47.791743  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:14:47.817692  190637 cri.go:87] found id: ""
	I1031 17:14:47.817725  190637 logs.go:274] 0 containers: []
	W1031 17:14:47.817737  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:14:47.817744  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:14:47.817843  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:14:47.842057  190637 cri.go:87] found id: ""
	I1031 17:14:47.842087  190637 logs.go:274] 0 containers: []
	W1031 17:14:47.842095  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:14:47.842103  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:14:47.842156  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:14:47.868158  190637 cri.go:87] found id: ""
	I1031 17:14:47.868191  190637 logs.go:274] 0 containers: []
	W1031 17:14:47.868199  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:14:47.868210  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:14:47.868222  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:14:47.884807  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:58 kubernetes-upgrade-171032 kubelet[3965]: E1031 17:13:58.106778    3965 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.885416  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:58 kubernetes-upgrade-171032 kubelet[3976]: E1031 17:13:58.860827    3976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.885845  190637 logs.go:138] Found kubelet problem: Oct 31 17:13:59 kubernetes-upgrade-171032 kubelet[3987]: E1031 17:13:59.607326    3987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.886269  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:00 kubernetes-upgrade-171032 kubelet[3998]: E1031 17:14:00.359061    3998 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.886683  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:01 kubernetes-upgrade-171032 kubelet[4010]: E1031 17:14:01.110891    4010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.887056  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:01 kubernetes-upgrade-171032 kubelet[4020]: E1031 17:14:01.860038    4020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.887438  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:02 kubernetes-upgrade-171032 kubelet[4033]: E1031 17:14:02.616441    4033 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.887814  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:03 kubernetes-upgrade-171032 kubelet[4043]: E1031 17:14:03.366329    4043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.888260  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:04 kubernetes-upgrade-171032 kubelet[4055]: E1031 17:14:04.114443    4055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.888681  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:04 kubernetes-upgrade-171032 kubelet[4067]: E1031 17:14:04.861115    4067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.889057  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:05 kubernetes-upgrade-171032 kubelet[4081]: E1031 17:14:05.612774    4081 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.889445  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:06 kubernetes-upgrade-171032 kubelet[4225]: E1031 17:14:06.357211    4225 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.889827  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:07 kubernetes-upgrade-171032 kubelet[4237]: E1031 17:14:07.117446    4237 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.890203  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:07 kubernetes-upgrade-171032 kubelet[4248]: E1031 17:14:07.864146    4248 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.890585  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:08 kubernetes-upgrade-171032 kubelet[4259]: E1031 17:14:08.610172    4259 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.890962  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:09 kubernetes-upgrade-171032 kubelet[4270]: E1031 17:14:09.366396    4270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.891338  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:10 kubernetes-upgrade-171032 kubelet[4281]: E1031 17:14:10.109468    4281 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.891768  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:10 kubernetes-upgrade-171032 kubelet[4292]: E1031 17:14:10.868702    4292 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.892296  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:11 kubernetes-upgrade-171032 kubelet[4303]: E1031 17:14:11.622760    4303 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.892718  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:12 kubernetes-upgrade-171032 kubelet[4313]: E1031 17:14:12.363613    4313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.893098  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4324]: E1031 17:14:13.121441    4324 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.893479  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4336]: E1031 17:14:13.857630    4336 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.893867  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:14 kubernetes-upgrade-171032 kubelet[4347]: E1031 17:14:14.609492    4347 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.894243  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:15 kubernetes-upgrade-171032 kubelet[4358]: E1031 17:14:15.362601    4358 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.894742  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:16 kubernetes-upgrade-171032 kubelet[4369]: E1031 17:14:16.119299    4369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.895264  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:16 kubernetes-upgrade-171032 kubelet[4515]: E1031 17:14:16.864545    4515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.895838  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:17 kubernetes-upgrade-171032 kubelet[4526]: E1031 17:14:17.624834    4526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.896446  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:18 kubernetes-upgrade-171032 kubelet[4537]: E1031 17:14:18.359221    4537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.897039  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:19 kubernetes-upgrade-171032 kubelet[4548]: E1031 17:14:19.116675    4548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.897636  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:19 kubernetes-upgrade-171032 kubelet[4559]: E1031 17:14:19.870756    4559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.898214  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:20 kubernetes-upgrade-171032 kubelet[4570]: E1031 17:14:20.618731    4570 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.898803  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:21 kubernetes-upgrade-171032 kubelet[4582]: E1031 17:14:21.361163    4582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.899324  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:22 kubernetes-upgrade-171032 kubelet[4593]: E1031 17:14:22.113798    4593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.899740  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:22 kubernetes-upgrade-171032 kubelet[4604]: E1031 17:14:22.870832    4604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.900139  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:23 kubernetes-upgrade-171032 kubelet[4616]: E1031 17:14:23.609420    4616 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.900491  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:24 kubernetes-upgrade-171032 kubelet[4626]: E1031 17:14:24.360595    4626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.900855  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4637]: E1031 17:14:25.112298    4637 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.901204  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4649]: E1031 17:14:25.856672    4649 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.901553  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:26 kubernetes-upgrade-171032 kubelet[4660]: E1031 17:14:26.613677    4660 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.901906  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:27 kubernetes-upgrade-171032 kubelet[4807]: E1031 17:14:27.362807    4807 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.902254  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:28 kubernetes-upgrade-171032 kubelet[4817]: E1031 17:14:28.108317    4817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.902613  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:28 kubernetes-upgrade-171032 kubelet[4828]: E1031 17:14:28.863297    4828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.902960  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:29 kubernetes-upgrade-171032 kubelet[4839]: E1031 17:14:29.619971    4839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.903322  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:30 kubernetes-upgrade-171032 kubelet[4849]: E1031 17:14:30.356851    4849 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.903712  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:31 kubernetes-upgrade-171032 kubelet[4860]: E1031 17:14:31.105768    4860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.904060  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:31 kubernetes-upgrade-171032 kubelet[4871]: E1031 17:14:31.858039    4871 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.904453  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:32 kubernetes-upgrade-171032 kubelet[4882]: E1031 17:14:32.605767    4882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.904829  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:33 kubernetes-upgrade-171032 kubelet[4893]: E1031 17:14:33.356209    4893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.905184  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:34 kubernetes-upgrade-171032 kubelet[4904]: E1031 17:14:34.107237    4904 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.905537  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:34 kubernetes-upgrade-171032 kubelet[4914]: E1031 17:14:34.859241    4914 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.905884  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:35 kubernetes-upgrade-171032 kubelet[4926]: E1031 17:14:35.607922    4926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.906234  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:36 kubernetes-upgrade-171032 kubelet[4937]: E1031 17:14:36.357349    4937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.906604  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:37 kubernetes-upgrade-171032 kubelet[4948]: E1031 17:14:37.105409    4948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.906972  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:37 kubernetes-upgrade-171032 kubelet[5096]: E1031 17:14:37.858515    5096 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.907323  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:38 kubernetes-upgrade-171032 kubelet[5108]: E1031 17:14:38.608993    5108 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.907688  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:39 kubernetes-upgrade-171032 kubelet[5119]: E1031 17:14:39.358471    5119 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.908099  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:40 kubernetes-upgrade-171032 kubelet[5130]: E1031 17:14:40.108054    5130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.908468  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:40 kubernetes-upgrade-171032 kubelet[5141]: E1031 17:14:40.857996    5141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.908823  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:41 kubernetes-upgrade-171032 kubelet[5152]: E1031 17:14:41.608805    5152 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.909168  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:42 kubernetes-upgrade-171032 kubelet[5162]: E1031 17:14:42.359435    5162 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.909536  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:43 kubernetes-upgrade-171032 kubelet[5172]: E1031 17:14:43.111495    5172 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.909885  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:43 kubernetes-upgrade-171032 kubelet[5183]: E1031 17:14:43.862725    5183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.910252  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:44 kubernetes-upgrade-171032 kubelet[5194]: E1031 17:14:44.611361    5194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.910607  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:45 kubernetes-upgrade-171032 kubelet[5205]: E1031 17:14:45.368951    5205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.910952  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:46 kubernetes-upgrade-171032 kubelet[5215]: E1031 17:14:46.117692    5215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.911369  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:46 kubernetes-upgrade-171032 kubelet[5226]: E1031 17:14:46.861837    5226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:47.911722  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:47 kubernetes-upgrade-171032 kubelet[5239]: E1031 17:14:47.615974    5239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:14:47.911839  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:14:47.911855  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:14:47.929855  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:14:47.929890  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:14:47.989319  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:14:47.989346  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:14:47.989363  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:14:48.034698  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:14:48.034736  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:14:48.063552  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:14:48.063580  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:14:48.063711  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:14:48.063730  190637 out.go:239]   Oct 31 17:14:44 kubernetes-upgrade-171032 kubelet[5194]: E1031 17:14:44.611361    5194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:48.063738  190637 out.go:239]   Oct 31 17:14:45 kubernetes-upgrade-171032 kubelet[5205]: E1031 17:14:45.368951    5205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:48.063750  190637 out.go:239]   Oct 31 17:14:46 kubernetes-upgrade-171032 kubelet[5215]: E1031 17:14:46.117692    5215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:48.063757  190637 out.go:239]   Oct 31 17:14:46 kubernetes-upgrade-171032 kubelet[5226]: E1031 17:14:46.861837    5226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:48.063764  190637 out.go:239]   Oct 31 17:14:47 kubernetes-upgrade-171032 kubelet[5239]: E1031 17:14:47.615974    5239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:14:48.063770  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:14:48.063781  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
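The crash loop above is self-explanatory once decoded: `--cni-conf-dir` was one of the dockershim-era kubelet flags removed in Kubernetes 1.24, so when the upgrade swaps in a v1.25.3 kubelet while the old systemd drop-in still passes that flag, every kubelet start exits immediately with "unknown flag" and the apiserver never comes up. A minimal sketch of checking a kubelet unit drop-in for such stale flags (the drop-in content and path here are illustrative assumptions, not taken from this run):

```shell
#!/usr/bin/env sh
# Hypothetical ExecStart line for illustration; on a real node this would be
# read from e.g. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
dropin='ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --cni-conf-dir=/etc/cni/net.d --container-runtime=remote'

# Dockershim-related flags removed from the kubelet in v1.24; any of these
# surviving an upgrade produces the "unknown flag" crash loop journaled above.
for flag in --cni-conf-dir --cni-bin-dir --network-plugin; do
  case "$dropin" in
    *"$flag"*) echo "stale kubelet flag found: $flag" ;;
  esac
done
```

On an affected node, removing the stale flag from the drop-in and running `systemctl daemon-reload && systemctl restart kubelet` would let the new kubelet parse its arguments and start.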
	I1031 17:14:58.064759  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:14:58.158914  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:14:58.159005  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:14:58.184760  190637 cri.go:87] found id: ""
	I1031 17:14:58.184785  190637 logs.go:274] 0 containers: []
	W1031 17:14:58.184791  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:14:58.184797  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:14:58.184839  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:14:58.209448  190637 cri.go:87] found id: ""
	I1031 17:14:58.209475  190637 logs.go:274] 0 containers: []
	W1031 17:14:58.209482  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:14:58.209488  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:14:58.209533  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:14:58.236051  190637 cri.go:87] found id: ""
	I1031 17:14:58.236132  190637 logs.go:274] 0 containers: []
	W1031 17:14:58.236148  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:14:58.236163  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:14:58.236233  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:14:58.260209  190637 cri.go:87] found id: ""
	I1031 17:14:58.260241  190637 logs.go:274] 0 containers: []
	W1031 17:14:58.260250  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:14:58.260257  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:14:58.260319  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:14:58.285119  190637 cri.go:87] found id: ""
	I1031 17:14:58.285151  190637 logs.go:274] 0 containers: []
	W1031 17:14:58.285160  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:14:58.285168  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:14:58.285222  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:14:58.309266  190637 cri.go:87] found id: ""
	I1031 17:14:58.309293  190637 logs.go:274] 0 containers: []
	W1031 17:14:58.309301  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:14:58.309310  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:14:58.309363  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:14:58.333644  190637 cri.go:87] found id: ""
	I1031 17:14:58.333675  190637 logs.go:274] 0 containers: []
	W1031 17:14:58.333684  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:14:58.333692  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:14:58.333746  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:14:58.358583  190637 cri.go:87] found id: ""
	I1031 17:14:58.358615  190637 logs.go:274] 0 containers: []
	W1031 17:14:58.358623  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:14:58.358634  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:14:58.358645  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:14:58.378521  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:08 kubernetes-upgrade-171032 kubelet[4259]: E1031 17:14:08.610172    4259 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.378916  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:09 kubernetes-upgrade-171032 kubelet[4270]: E1031 17:14:09.366396    4270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.379262  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:10 kubernetes-upgrade-171032 kubelet[4281]: E1031 17:14:10.109468    4281 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.379661  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:10 kubernetes-upgrade-171032 kubelet[4292]: E1031 17:14:10.868702    4292 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.380010  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:11 kubernetes-upgrade-171032 kubelet[4303]: E1031 17:14:11.622760    4303 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.380450  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:12 kubernetes-upgrade-171032 kubelet[4313]: E1031 17:14:12.363613    4313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.380838  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4324]: E1031 17:14:13.121441    4324 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.381193  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:13 kubernetes-upgrade-171032 kubelet[4336]: E1031 17:14:13.857630    4336 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.381542  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:14 kubernetes-upgrade-171032 kubelet[4347]: E1031 17:14:14.609492    4347 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.381928  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:15 kubernetes-upgrade-171032 kubelet[4358]: E1031 17:14:15.362601    4358 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.382283  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:16 kubernetes-upgrade-171032 kubelet[4369]: E1031 17:14:16.119299    4369 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.382632  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:16 kubernetes-upgrade-171032 kubelet[4515]: E1031 17:14:16.864545    4515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.382976  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:17 kubernetes-upgrade-171032 kubelet[4526]: E1031 17:14:17.624834    4526 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.383340  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:18 kubernetes-upgrade-171032 kubelet[4537]: E1031 17:14:18.359221    4537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.383683  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:19 kubernetes-upgrade-171032 kubelet[4548]: E1031 17:14:19.116675    4548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.384028  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:19 kubernetes-upgrade-171032 kubelet[4559]: E1031 17:14:19.870756    4559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.384452  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:20 kubernetes-upgrade-171032 kubelet[4570]: E1031 17:14:20.618731    4570 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.384801  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:21 kubernetes-upgrade-171032 kubelet[4582]: E1031 17:14:21.361163    4582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.385142  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:22 kubernetes-upgrade-171032 kubelet[4593]: E1031 17:14:22.113798    4593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.385486  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:22 kubernetes-upgrade-171032 kubelet[4604]: E1031 17:14:22.870832    4604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.385826  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:23 kubernetes-upgrade-171032 kubelet[4616]: E1031 17:14:23.609420    4616 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.386174  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:24 kubernetes-upgrade-171032 kubelet[4626]: E1031 17:14:24.360595    4626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.386526  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4637]: E1031 17:14:25.112298    4637 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.386934  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4649]: E1031 17:14:25.856672    4649 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.387294  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:26 kubernetes-upgrade-171032 kubelet[4660]: E1031 17:14:26.613677    4660 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.387653  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:27 kubernetes-upgrade-171032 kubelet[4807]: E1031 17:14:27.362807    4807 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.387996  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:28 kubernetes-upgrade-171032 kubelet[4817]: E1031 17:14:28.108317    4817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.388373  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:28 kubernetes-upgrade-171032 kubelet[4828]: E1031 17:14:28.863297    4828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.388721  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:29 kubernetes-upgrade-171032 kubelet[4839]: E1031 17:14:29.619971    4839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.389061  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:30 kubernetes-upgrade-171032 kubelet[4849]: E1031 17:14:30.356851    4849 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.389413  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:31 kubernetes-upgrade-171032 kubelet[4860]: E1031 17:14:31.105768    4860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.389757  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:31 kubernetes-upgrade-171032 kubelet[4871]: E1031 17:14:31.858039    4871 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.390108  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:32 kubernetes-upgrade-171032 kubelet[4882]: E1031 17:14:32.605767    4882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.390464  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:33 kubernetes-upgrade-171032 kubelet[4893]: E1031 17:14:33.356209    4893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.390818  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:34 kubernetes-upgrade-171032 kubelet[4904]: E1031 17:14:34.107237    4904 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.391182  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:34 kubernetes-upgrade-171032 kubelet[4914]: E1031 17:14:34.859241    4914 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.391530  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:35 kubernetes-upgrade-171032 kubelet[4926]: E1031 17:14:35.607922    4926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.391872  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:36 kubernetes-upgrade-171032 kubelet[4937]: E1031 17:14:36.357349    4937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.392253  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:37 kubernetes-upgrade-171032 kubelet[4948]: E1031 17:14:37.105409    4948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.392678  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:37 kubernetes-upgrade-171032 kubelet[5096]: E1031 17:14:37.858515    5096 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.393151  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:38 kubernetes-upgrade-171032 kubelet[5108]: E1031 17:14:38.608993    5108 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.393695  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:39 kubernetes-upgrade-171032 kubelet[5119]: E1031 17:14:39.358471    5119 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.394236  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:40 kubernetes-upgrade-171032 kubelet[5130]: E1031 17:14:40.108054    5130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.394599  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:40 kubernetes-upgrade-171032 kubelet[5141]: E1031 17:14:40.857996    5141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.394998  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:41 kubernetes-upgrade-171032 kubelet[5152]: E1031 17:14:41.608805    5152 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.395354  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:42 kubernetes-upgrade-171032 kubelet[5162]: E1031 17:14:42.359435    5162 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.395859  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:43 kubernetes-upgrade-171032 kubelet[5172]: E1031 17:14:43.111495    5172 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.396464  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:43 kubernetes-upgrade-171032 kubelet[5183]: E1031 17:14:43.862725    5183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.396954  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:44 kubernetes-upgrade-171032 kubelet[5194]: E1031 17:14:44.611361    5194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.397433  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:45 kubernetes-upgrade-171032 kubelet[5205]: E1031 17:14:45.368951    5205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.397918  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:46 kubernetes-upgrade-171032 kubelet[5215]: E1031 17:14:46.117692    5215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.398510  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:46 kubernetes-upgrade-171032 kubelet[5226]: E1031 17:14:46.861837    5226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.399098  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:47 kubernetes-upgrade-171032 kubelet[5239]: E1031 17:14:47.615974    5239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.399645  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:48 kubernetes-upgrade-171032 kubelet[5388]: E1031 17:14:48.386710    5388 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.400159  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:49 kubernetes-upgrade-171032 kubelet[5399]: E1031 17:14:49.112768    5399 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.400586  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:49 kubernetes-upgrade-171032 kubelet[5410]: E1031 17:14:49.858019    5410 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.400965  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:50 kubernetes-upgrade-171032 kubelet[5421]: E1031 17:14:50.608344    5421 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.401340  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:51 kubernetes-upgrade-171032 kubelet[5432]: E1031 17:14:51.366721    5432 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.401886  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:52 kubernetes-upgrade-171032 kubelet[5444]: E1031 17:14:52.113463    5444 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.402535  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:52 kubernetes-upgrade-171032 kubelet[5455]: E1031 17:14:52.857207    5455 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.403080  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:53 kubernetes-upgrade-171032 kubelet[5466]: E1031 17:14:53.606136    5466 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.403565  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:54 kubernetes-upgrade-171032 kubelet[5477]: E1031 17:14:54.357248    5477 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.404140  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:55 kubernetes-upgrade-171032 kubelet[5488]: E1031 17:14:55.109823    5488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.404647  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:55 kubernetes-upgrade-171032 kubelet[5499]: E1031 17:14:55.858792    5499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.405080  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:56 kubernetes-upgrade-171032 kubelet[5510]: E1031 17:14:56.607649    5510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.405485  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:57 kubernetes-upgrade-171032 kubelet[5521]: E1031 17:14:57.356667    5521 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.405896  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:58 kubernetes-upgrade-171032 kubelet[5532]: E1031 17:14:58.108121    5532 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:14:58.406038  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:14:58.406056  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:14:58.424799  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:14:58.424834  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:14:58.485100  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:14:58.485128  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:14:58.485140  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:14:58.520883  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:14:58.520916  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:14:58.548032  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:14:58.548061  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:14:58.548236  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:14:58.548254  190637 out.go:239]   Oct 31 17:14:55 kubernetes-upgrade-171032 kubelet[5488]: E1031 17:14:55.109823    5488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.548263  190637 out.go:239]   Oct 31 17:14:55 kubernetes-upgrade-171032 kubelet[5499]: E1031 17:14:55.858792    5499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.548274  190637 out.go:239]   Oct 31 17:14:56 kubernetes-upgrade-171032 kubelet[5510]: E1031 17:14:56.607649    5510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.548303  190637 out.go:239]   Oct 31 17:14:57 kubernetes-upgrade-171032 kubelet[5521]: E1031 17:14:57.356667    5521 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:14:58.548315  190637 out.go:239]   Oct 31 17:14:58 kubernetes-upgrade-171032 kubelet[5532]: E1031 17:14:58.108121    5532 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:14:58.548325  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:14:58.548335  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:15:08.549006  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:15:08.659060  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:15:08.659141  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:15:08.686033  190637 cri.go:87] found id: ""
	I1031 17:15:08.686063  190637 logs.go:274] 0 containers: []
	W1031 17:15:08.686071  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:15:08.686079  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:15:08.686144  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:15:08.712516  190637 cri.go:87] found id: ""
	I1031 17:15:08.712546  190637 logs.go:274] 0 containers: []
	W1031 17:15:08.712556  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:15:08.712564  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:15:08.712627  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:15:08.737424  190637 cri.go:87] found id: ""
	I1031 17:15:08.737450  190637 logs.go:274] 0 containers: []
	W1031 17:15:08.737456  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:15:08.737463  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:15:08.737519  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:15:08.762633  190637 cri.go:87] found id: ""
	I1031 17:15:08.762662  190637 logs.go:274] 0 containers: []
	W1031 17:15:08.762668  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:15:08.762679  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:15:08.762730  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:15:08.788256  190637 cri.go:87] found id: ""
	I1031 17:15:08.788282  190637 logs.go:274] 0 containers: []
	W1031 17:15:08.788291  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:15:08.788298  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:15:08.788352  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:15:08.814393  190637 cri.go:87] found id: ""
	I1031 17:15:08.814418  190637 logs.go:274] 0 containers: []
	W1031 17:15:08.814426  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:15:08.814434  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:15:08.814486  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:15:08.838748  190637 cri.go:87] found id: ""
	I1031 17:15:08.838779  190637 logs.go:274] 0 containers: []
	W1031 17:15:08.838790  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:15:08.838799  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:15:08.838849  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:15:08.863884  190637 cri.go:87] found id: ""
	I1031 17:15:08.863910  190637 logs.go:274] 0 containers: []
	W1031 17:15:08.863916  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:15:08.863924  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:15:08.863941  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:15:08.900314  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:15:08.900357  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:15:08.928495  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:15:08.928525  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:15:08.945166  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:19 kubernetes-upgrade-171032 kubelet[4548]: E1031 17:14:19.116675    4548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.945757  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:19 kubernetes-upgrade-171032 kubelet[4559]: E1031 17:14:19.870756    4559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.946333  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:20 kubernetes-upgrade-171032 kubelet[4570]: E1031 17:14:20.618731    4570 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.946918  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:21 kubernetes-upgrade-171032 kubelet[4582]: E1031 17:14:21.361163    4582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.947499  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:22 kubernetes-upgrade-171032 kubelet[4593]: E1031 17:14:22.113798    4593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.948097  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:22 kubernetes-upgrade-171032 kubelet[4604]: E1031 17:14:22.870832    4604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.948774  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:23 kubernetes-upgrade-171032 kubelet[4616]: E1031 17:14:23.609420    4616 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.949230  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:24 kubernetes-upgrade-171032 kubelet[4626]: E1031 17:14:24.360595    4626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.949614  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4637]: E1031 17:14:25.112298    4637 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.949992  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:25 kubernetes-upgrade-171032 kubelet[4649]: E1031 17:14:25.856672    4649 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.950393  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:26 kubernetes-upgrade-171032 kubelet[4660]: E1031 17:14:26.613677    4660 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.950773  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:27 kubernetes-upgrade-171032 kubelet[4807]: E1031 17:14:27.362807    4807 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.951148  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:28 kubernetes-upgrade-171032 kubelet[4817]: E1031 17:14:28.108317    4817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.951534  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:28 kubernetes-upgrade-171032 kubelet[4828]: E1031 17:14:28.863297    4828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.951917  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:29 kubernetes-upgrade-171032 kubelet[4839]: E1031 17:14:29.619971    4839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.952343  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:30 kubernetes-upgrade-171032 kubelet[4849]: E1031 17:14:30.356851    4849 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.952725  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:31 kubernetes-upgrade-171032 kubelet[4860]: E1031 17:14:31.105768    4860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.953117  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:31 kubernetes-upgrade-171032 kubelet[4871]: E1031 17:14:31.858039    4871 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.953501  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:32 kubernetes-upgrade-171032 kubelet[4882]: E1031 17:14:32.605767    4882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.953880  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:33 kubernetes-upgrade-171032 kubelet[4893]: E1031 17:14:33.356209    4893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.954451  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:34 kubernetes-upgrade-171032 kubelet[4904]: E1031 17:14:34.107237    4904 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.955050  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:34 kubernetes-upgrade-171032 kubelet[4914]: E1031 17:14:34.859241    4914 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.955665  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:35 kubernetes-upgrade-171032 kubelet[4926]: E1031 17:14:35.607922    4926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.956282  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:36 kubernetes-upgrade-171032 kubelet[4937]: E1031 17:14:36.357349    4937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.956705  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:37 kubernetes-upgrade-171032 kubelet[4948]: E1031 17:14:37.105409    4948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.957225  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:37 kubernetes-upgrade-171032 kubelet[5096]: E1031 17:14:37.858515    5096 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.957795  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:38 kubernetes-upgrade-171032 kubelet[5108]: E1031 17:14:38.608993    5108 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.958307  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:39 kubernetes-upgrade-171032 kubelet[5119]: E1031 17:14:39.358471    5119 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.958764  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:40 kubernetes-upgrade-171032 kubelet[5130]: E1031 17:14:40.108054    5130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.959215  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:40 kubernetes-upgrade-171032 kubelet[5141]: E1031 17:14:40.857996    5141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.959602  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:41 kubernetes-upgrade-171032 kubelet[5152]: E1031 17:14:41.608805    5152 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.959993  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:42 kubernetes-upgrade-171032 kubelet[5162]: E1031 17:14:42.359435    5162 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.960380  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:43 kubernetes-upgrade-171032 kubelet[5172]: E1031 17:14:43.111495    5172 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.960732  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:43 kubernetes-upgrade-171032 kubelet[5183]: E1031 17:14:43.862725    5183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.961096  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:44 kubernetes-upgrade-171032 kubelet[5194]: E1031 17:14:44.611361    5194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.961465  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:45 kubernetes-upgrade-171032 kubelet[5205]: E1031 17:14:45.368951    5205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.961807  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:46 kubernetes-upgrade-171032 kubelet[5215]: E1031 17:14:46.117692    5215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.962155  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:46 kubernetes-upgrade-171032 kubelet[5226]: E1031 17:14:46.861837    5226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.962503  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:47 kubernetes-upgrade-171032 kubelet[5239]: E1031 17:14:47.615974    5239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.962856  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:48 kubernetes-upgrade-171032 kubelet[5388]: E1031 17:14:48.386710    5388 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.963221  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:49 kubernetes-upgrade-171032 kubelet[5399]: E1031 17:14:49.112768    5399 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.963569  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:49 kubernetes-upgrade-171032 kubelet[5410]: E1031 17:14:49.858019    5410 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.963915  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:50 kubernetes-upgrade-171032 kubelet[5421]: E1031 17:14:50.608344    5421 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.964337  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:51 kubernetes-upgrade-171032 kubelet[5432]: E1031 17:14:51.366721    5432 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.964687  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:52 kubernetes-upgrade-171032 kubelet[5444]: E1031 17:14:52.113463    5444 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.965085  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:52 kubernetes-upgrade-171032 kubelet[5455]: E1031 17:14:52.857207    5455 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.965462  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:53 kubernetes-upgrade-171032 kubelet[5466]: E1031 17:14:53.606136    5466 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.965817  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:54 kubernetes-upgrade-171032 kubelet[5477]: E1031 17:14:54.357248    5477 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.966166  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:55 kubernetes-upgrade-171032 kubelet[5488]: E1031 17:14:55.109823    5488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.966518  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:55 kubernetes-upgrade-171032 kubelet[5499]: E1031 17:14:55.858792    5499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.966860  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:56 kubernetes-upgrade-171032 kubelet[5510]: E1031 17:14:56.607649    5510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.967204  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:57 kubernetes-upgrade-171032 kubelet[5521]: E1031 17:14:57.356667    5521 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.967573  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:58 kubernetes-upgrade-171032 kubelet[5532]: E1031 17:14:58.108121    5532 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.967921  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:58 kubernetes-upgrade-171032 kubelet[5680]: E1031 17:14:58.858501    5680 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.968295  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:59 kubernetes-upgrade-171032 kubelet[5691]: E1031 17:14:59.609000    5691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.968724  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:00 kubernetes-upgrade-171032 kubelet[5702]: E1031 17:15:00.357407    5702 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.969142  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:01 kubernetes-upgrade-171032 kubelet[5713]: E1031 17:15:01.112374    5713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.969520  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:01 kubernetes-upgrade-171032 kubelet[5725]: E1031 17:15:01.861185    5725 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.969919  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:02 kubernetes-upgrade-171032 kubelet[5737]: E1031 17:15:02.611197    5737 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.970331  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:03 kubernetes-upgrade-171032 kubelet[5749]: E1031 17:15:03.359242    5749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.970733  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:04 kubernetes-upgrade-171032 kubelet[5760]: E1031 17:15:04.109453    5760 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.971182  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:04 kubernetes-upgrade-171032 kubelet[5771]: E1031 17:15:04.864773    5771 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.971651  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:05 kubernetes-upgrade-171032 kubelet[5782]: E1031 17:15:05.622804    5782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.972124  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:06 kubernetes-upgrade-171032 kubelet[5794]: E1031 17:15:06.358628    5794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.972513  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5804]: E1031 17:15:07.107840    5804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.972875  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5815]: E1031 17:15:07.857348    5815 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:08.973250  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:08 kubernetes-upgrade-171032 kubelet[5828]: E1031 17:15:08.608144    5828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:15:08.973421  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:15:08.973438  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:15:08.991213  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:15:08.991244  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:15:09.048337  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:15:09.048362  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:15:09.048372  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:15:09.048490  190637 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1031 17:15:09.048507  190637 out.go:239]   Oct 31 17:15:05 kubernetes-upgrade-171032 kubelet[5782]: E1031 17:15:05.622804    5782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:15:05 kubernetes-upgrade-171032 kubelet[5782]: E1031 17:15:05.622804    5782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:09.048516  190637 out.go:239]   Oct 31 17:15:06 kubernetes-upgrade-171032 kubelet[5794]: E1031 17:15:06.358628    5794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:15:06 kubernetes-upgrade-171032 kubelet[5794]: E1031 17:15:06.358628    5794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:09.048523  190637 out.go:239]   Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5804]: E1031 17:15:07.107840    5804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5804]: E1031 17:15:07.107840    5804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:09.048535  190637 out.go:239]   Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5815]: E1031 17:15:07.857348    5815 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5815]: E1031 17:15:07.857348    5815 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:09.048546  190637 out.go:239]   Oct 31 17:15:08 kubernetes-upgrade-171032 kubelet[5828]: E1031 17:15:08.608144    5828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Oct 31 17:15:08 kubernetes-upgrade-171032 kubelet[5828]: E1031 17:15:08.608144    5828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:15:09.048556  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:15:09.048567  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:15:19.050212  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:15:19.158968  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:15:19.159043  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:15:19.185590  190637 cri.go:87] found id: ""
	I1031 17:15:19.185614  190637 logs.go:274] 0 containers: []
	W1031 17:15:19.185620  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:15:19.185626  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:15:19.185671  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:15:19.211820  190637 cri.go:87] found id: ""
	I1031 17:15:19.211857  190637 logs.go:274] 0 containers: []
	W1031 17:15:19.211866  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:15:19.211874  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:15:19.211938  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:15:19.236796  190637 cri.go:87] found id: ""
	I1031 17:15:19.236828  190637 logs.go:274] 0 containers: []
	W1031 17:15:19.236834  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:15:19.236840  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:15:19.236884  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:15:19.261438  190637 cri.go:87] found id: ""
	I1031 17:15:19.261469  190637 logs.go:274] 0 containers: []
	W1031 17:15:19.261477  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:15:19.261484  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:15:19.261542  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:15:19.286070  190637 cri.go:87] found id: ""
	I1031 17:15:19.286101  190637 logs.go:274] 0 containers: []
	W1031 17:15:19.286110  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:15:19.286121  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:15:19.286173  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:15:19.310858  190637 cri.go:87] found id: ""
	I1031 17:15:19.310884  190637 logs.go:274] 0 containers: []
	W1031 17:15:19.310890  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:15:19.310896  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:15:19.310945  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:15:19.335723  190637 cri.go:87] found id: ""
	I1031 17:15:19.335756  190637 logs.go:274] 0 containers: []
	W1031 17:15:19.335763  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:15:19.335770  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:15:19.335831  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:15:19.361531  190637 cri.go:87] found id: ""
	I1031 17:15:19.361564  190637 logs.go:274] 0 containers: []
	W1031 17:15:19.361573  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:15:19.361585  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:15:19.361599  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:15:19.381508  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:29 kubernetes-upgrade-171032 kubelet[4839]: E1031 17:14:29.619971    4839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.383083  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:30 kubernetes-upgrade-171032 kubelet[4849]: E1031 17:14:30.356851    4849 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.383487  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:31 kubernetes-upgrade-171032 kubelet[4860]: E1031 17:14:31.105768    4860 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.383884  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:31 kubernetes-upgrade-171032 kubelet[4871]: E1031 17:14:31.858039    4871 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.384311  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:32 kubernetes-upgrade-171032 kubelet[4882]: E1031 17:14:32.605767    4882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.384690  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:33 kubernetes-upgrade-171032 kubelet[4893]: E1031 17:14:33.356209    4893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.385051  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:34 kubernetes-upgrade-171032 kubelet[4904]: E1031 17:14:34.107237    4904 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.385418  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:34 kubernetes-upgrade-171032 kubelet[4914]: E1031 17:14:34.859241    4914 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.385782  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:35 kubernetes-upgrade-171032 kubelet[4926]: E1031 17:14:35.607922    4926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.386154  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:36 kubernetes-upgrade-171032 kubelet[4937]: E1031 17:14:36.357349    4937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.386515  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:37 kubernetes-upgrade-171032 kubelet[4948]: E1031 17:14:37.105409    4948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.386877  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:37 kubernetes-upgrade-171032 kubelet[5096]: E1031 17:14:37.858515    5096 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.387257  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:38 kubernetes-upgrade-171032 kubelet[5108]: E1031 17:14:38.608993    5108 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.387619  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:39 kubernetes-upgrade-171032 kubelet[5119]: E1031 17:14:39.358471    5119 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.387980  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:40 kubernetes-upgrade-171032 kubelet[5130]: E1031 17:14:40.108054    5130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.388373  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:40 kubernetes-upgrade-171032 kubelet[5141]: E1031 17:14:40.857996    5141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.388735  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:41 kubernetes-upgrade-171032 kubelet[5152]: E1031 17:14:41.608805    5152 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.389101  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:42 kubernetes-upgrade-171032 kubelet[5162]: E1031 17:14:42.359435    5162 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.389491  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:43 kubernetes-upgrade-171032 kubelet[5172]: E1031 17:14:43.111495    5172 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.389859  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:43 kubernetes-upgrade-171032 kubelet[5183]: E1031 17:14:43.862725    5183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.390221  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:44 kubernetes-upgrade-171032 kubelet[5194]: E1031 17:14:44.611361    5194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.390585  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:45 kubernetes-upgrade-171032 kubelet[5205]: E1031 17:14:45.368951    5205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.390970  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:46 kubernetes-upgrade-171032 kubelet[5215]: E1031 17:14:46.117692    5215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.391343  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:46 kubernetes-upgrade-171032 kubelet[5226]: E1031 17:14:46.861837    5226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.391727  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:47 kubernetes-upgrade-171032 kubelet[5239]: E1031 17:14:47.615974    5239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.392188  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:48 kubernetes-upgrade-171032 kubelet[5388]: E1031 17:14:48.386710    5388 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.392548  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:49 kubernetes-upgrade-171032 kubelet[5399]: E1031 17:14:49.112768    5399 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.392911  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:49 kubernetes-upgrade-171032 kubelet[5410]: E1031 17:14:49.858019    5410 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.393274  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:50 kubernetes-upgrade-171032 kubelet[5421]: E1031 17:14:50.608344    5421 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.393623  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:51 kubernetes-upgrade-171032 kubelet[5432]: E1031 17:14:51.366721    5432 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.393989  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:52 kubernetes-upgrade-171032 kubelet[5444]: E1031 17:14:52.113463    5444 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.394336  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:52 kubernetes-upgrade-171032 kubelet[5455]: E1031 17:14:52.857207    5455 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.394686  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:53 kubernetes-upgrade-171032 kubelet[5466]: E1031 17:14:53.606136    5466 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.395041  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:54 kubernetes-upgrade-171032 kubelet[5477]: E1031 17:14:54.357248    5477 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.395410  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:55 kubernetes-upgrade-171032 kubelet[5488]: E1031 17:14:55.109823    5488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.395758  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:55 kubernetes-upgrade-171032 kubelet[5499]: E1031 17:14:55.858792    5499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.396134  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:56 kubernetes-upgrade-171032 kubelet[5510]: E1031 17:14:56.607649    5510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.396533  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:57 kubernetes-upgrade-171032 kubelet[5521]: E1031 17:14:57.356667    5521 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.396909  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:58 kubernetes-upgrade-171032 kubelet[5532]: E1031 17:14:58.108121    5532 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.397267  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:58 kubernetes-upgrade-171032 kubelet[5680]: E1031 17:14:58.858501    5680 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.397614  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:59 kubernetes-upgrade-171032 kubelet[5691]: E1031 17:14:59.609000    5691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.397966  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:00 kubernetes-upgrade-171032 kubelet[5702]: E1031 17:15:00.357407    5702 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.398317  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:01 kubernetes-upgrade-171032 kubelet[5713]: E1031 17:15:01.112374    5713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.398682  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:01 kubernetes-upgrade-171032 kubelet[5725]: E1031 17:15:01.861185    5725 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.399079  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:02 kubernetes-upgrade-171032 kubelet[5737]: E1031 17:15:02.611197    5737 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.399427  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:03 kubernetes-upgrade-171032 kubelet[5749]: E1031 17:15:03.359242    5749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.399765  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:04 kubernetes-upgrade-171032 kubelet[5760]: E1031 17:15:04.109453    5760 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.400142  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:04 kubernetes-upgrade-171032 kubelet[5771]: E1031 17:15:04.864773    5771 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.400497  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:05 kubernetes-upgrade-171032 kubelet[5782]: E1031 17:15:05.622804    5782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.400841  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:06 kubernetes-upgrade-171032 kubelet[5794]: E1031 17:15:06.358628    5794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.401191  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5804]: E1031 17:15:07.107840    5804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.401537  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5815]: E1031 17:15:07.857348    5815 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.401890  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:08 kubernetes-upgrade-171032 kubelet[5828]: E1031 17:15:08.608144    5828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.402240  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:09 kubernetes-upgrade-171032 kubelet[5976]: E1031 17:15:09.357321    5976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.402598  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:10 kubernetes-upgrade-171032 kubelet[5987]: E1031 17:15:10.107811    5987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.402951  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:10 kubernetes-upgrade-171032 kubelet[5998]: E1031 17:15:10.858421    5998 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.403307  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:11 kubernetes-upgrade-171032 kubelet[6008]: E1031 17:15:11.622213    6008 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.403653  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:12 kubernetes-upgrade-171032 kubelet[6020]: E1031 17:15:12.363455    6020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.403997  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:13 kubernetes-upgrade-171032 kubelet[6031]: E1031 17:15:13.117430    6031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.404361  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:13 kubernetes-upgrade-171032 kubelet[6044]: E1031 17:15:13.860525    6044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.404714  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:14 kubernetes-upgrade-171032 kubelet[6055]: E1031 17:15:14.616489    6055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.405059  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:15 kubernetes-upgrade-171032 kubelet[6066]: E1031 17:15:15.358075    6066 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.405403  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:16 kubernetes-upgrade-171032 kubelet[6077]: E1031 17:15:16.110959    6077 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.405745  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:16 kubernetes-upgrade-171032 kubelet[6088]: E1031 17:15:16.862250    6088 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.406101  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:17 kubernetes-upgrade-171032 kubelet[6099]: E1031 17:15:17.607243    6099 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.406456  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:18 kubernetes-upgrade-171032 kubelet[6111]: E1031 17:15:18.357406    6111 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.406798  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:19 kubernetes-upgrade-171032 kubelet[6124]: E1031 17:15:19.112583    6124 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:15:19.406916  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:15:19.406937  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:15:19.424500  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:15:19.424532  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:15:19.481491  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:15:19.481515  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:15:19.481525  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:15:19.518598  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:15:19.518642  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:15:19.547526  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:15:19.547552  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:15:19.547673  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:15:19.547692  190637 out.go:239]   Oct 31 17:15:16 kubernetes-upgrade-171032 kubelet[6077]: E1031 17:15:16.110959    6077 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.547700  190637 out.go:239]   Oct 31 17:15:16 kubernetes-upgrade-171032 kubelet[6088]: E1031 17:15:16.862250    6088 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.547712  190637 out.go:239]   Oct 31 17:15:17 kubernetes-upgrade-171032 kubelet[6099]: E1031 17:15:17.607243    6099 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.547723  190637 out.go:239]   Oct 31 17:15:18 kubernetes-upgrade-171032 kubelet[6111]: E1031 17:15:18.357406    6111 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:19.547735  190637 out.go:239]   Oct 31 17:15:19 kubernetes-upgrade-171032 kubelet[6124]: E1031 17:15:19.112583    6124 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:15:19.547741  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:15:19.547749  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:15:29.548218  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:15:29.658909  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:15:29.658981  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:15:29.683560  190637 cri.go:87] found id: ""
	I1031 17:15:29.683584  190637 logs.go:274] 0 containers: []
	W1031 17:15:29.683589  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:15:29.683594  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:15:29.683641  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:15:29.708731  190637 cri.go:87] found id: ""
	I1031 17:15:29.708757  190637 logs.go:274] 0 containers: []
	W1031 17:15:29.708765  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:15:29.708776  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:15:29.708836  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:15:29.734210  190637 cri.go:87] found id: ""
	I1031 17:15:29.734239  190637 logs.go:274] 0 containers: []
	W1031 17:15:29.734246  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:15:29.734252  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:15:29.734311  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:15:29.759194  190637 cri.go:87] found id: ""
	I1031 17:15:29.759226  190637 logs.go:274] 0 containers: []
	W1031 17:15:29.759236  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:15:29.759244  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:15:29.759294  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:15:29.784404  190637 cri.go:87] found id: ""
	I1031 17:15:29.784431  190637 logs.go:274] 0 containers: []
	W1031 17:15:29.784437  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:15:29.784442  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:15:29.784488  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:15:29.809055  190637 cri.go:87] found id: ""
	I1031 17:15:29.809085  190637 logs.go:274] 0 containers: []
	W1031 17:15:29.809092  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:15:29.809098  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:15:29.809143  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:15:29.833420  190637 cri.go:87] found id: ""
	I1031 17:15:29.833451  190637 logs.go:274] 0 containers: []
	W1031 17:15:29.833457  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:15:29.833463  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:15:29.833512  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:15:29.857456  190637 cri.go:87] found id: ""
	I1031 17:15:29.857482  190637 logs.go:274] 0 containers: []
	W1031 17:15:29.857495  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:15:29.857507  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:15:29.857520  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:15:29.875735  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:40 kubernetes-upgrade-171032 kubelet[5130]: E1031 17:14:40.108054    5130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.876167  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:40 kubernetes-upgrade-171032 kubelet[5141]: E1031 17:14:40.857996    5141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.876551  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:41 kubernetes-upgrade-171032 kubelet[5152]: E1031 17:14:41.608805    5152 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.876914  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:42 kubernetes-upgrade-171032 kubelet[5162]: E1031 17:14:42.359435    5162 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.877296  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:43 kubernetes-upgrade-171032 kubelet[5172]: E1031 17:14:43.111495    5172 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.877694  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:43 kubernetes-upgrade-171032 kubelet[5183]: E1031 17:14:43.862725    5183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.878053  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:44 kubernetes-upgrade-171032 kubelet[5194]: E1031 17:14:44.611361    5194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.878415  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:45 kubernetes-upgrade-171032 kubelet[5205]: E1031 17:14:45.368951    5205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.878774  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:46 kubernetes-upgrade-171032 kubelet[5215]: E1031 17:14:46.117692    5215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.879139  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:46 kubernetes-upgrade-171032 kubelet[5226]: E1031 17:14:46.861837    5226 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.879503  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:47 kubernetes-upgrade-171032 kubelet[5239]: E1031 17:14:47.615974    5239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.879859  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:48 kubernetes-upgrade-171032 kubelet[5388]: E1031 17:14:48.386710    5388 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.880296  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:49 kubernetes-upgrade-171032 kubelet[5399]: E1031 17:14:49.112768    5399 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.880675  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:49 kubernetes-upgrade-171032 kubelet[5410]: E1031 17:14:49.858019    5410 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.881048  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:50 kubernetes-upgrade-171032 kubelet[5421]: E1031 17:14:50.608344    5421 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.881452  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:51 kubernetes-upgrade-171032 kubelet[5432]: E1031 17:14:51.366721    5432 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.881836  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:52 kubernetes-upgrade-171032 kubelet[5444]: E1031 17:14:52.113463    5444 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.882244  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:52 kubernetes-upgrade-171032 kubelet[5455]: E1031 17:14:52.857207    5455 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.882629  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:53 kubernetes-upgrade-171032 kubelet[5466]: E1031 17:14:53.606136    5466 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.882997  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:54 kubernetes-upgrade-171032 kubelet[5477]: E1031 17:14:54.357248    5477 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.883351  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:55 kubernetes-upgrade-171032 kubelet[5488]: E1031 17:14:55.109823    5488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.883718  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:55 kubernetes-upgrade-171032 kubelet[5499]: E1031 17:14:55.858792    5499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.884108  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:56 kubernetes-upgrade-171032 kubelet[5510]: E1031 17:14:56.607649    5510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.884479  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:57 kubernetes-upgrade-171032 kubelet[5521]: E1031 17:14:57.356667    5521 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.884838  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:58 kubernetes-upgrade-171032 kubelet[5532]: E1031 17:14:58.108121    5532 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.885198  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:58 kubernetes-upgrade-171032 kubelet[5680]: E1031 17:14:58.858501    5680 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.885556  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:59 kubernetes-upgrade-171032 kubelet[5691]: E1031 17:14:59.609000    5691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.885914  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:00 kubernetes-upgrade-171032 kubelet[5702]: E1031 17:15:00.357407    5702 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.886279  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:01 kubernetes-upgrade-171032 kubelet[5713]: E1031 17:15:01.112374    5713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.886650  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:01 kubernetes-upgrade-171032 kubelet[5725]: E1031 17:15:01.861185    5725 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.887002  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:02 kubernetes-upgrade-171032 kubelet[5737]: E1031 17:15:02.611197    5737 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.887360  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:03 kubernetes-upgrade-171032 kubelet[5749]: E1031 17:15:03.359242    5749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.887715  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:04 kubernetes-upgrade-171032 kubelet[5760]: E1031 17:15:04.109453    5760 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.888104  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:04 kubernetes-upgrade-171032 kubelet[5771]: E1031 17:15:04.864773    5771 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.888519  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:05 kubernetes-upgrade-171032 kubelet[5782]: E1031 17:15:05.622804    5782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.888897  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:06 kubernetes-upgrade-171032 kubelet[5794]: E1031 17:15:06.358628    5794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.889405  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5804]: E1031 17:15:07.107840    5804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.889785  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5815]: E1031 17:15:07.857348    5815 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.890151  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:08 kubernetes-upgrade-171032 kubelet[5828]: E1031 17:15:08.608144    5828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.890519  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:09 kubernetes-upgrade-171032 kubelet[5976]: E1031 17:15:09.357321    5976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.890884  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:10 kubernetes-upgrade-171032 kubelet[5987]: E1031 17:15:10.107811    5987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.891246  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:10 kubernetes-upgrade-171032 kubelet[5998]: E1031 17:15:10.858421    5998 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.891618  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:11 kubernetes-upgrade-171032 kubelet[6008]: E1031 17:15:11.622213    6008 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.891975  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:12 kubernetes-upgrade-171032 kubelet[6020]: E1031 17:15:12.363455    6020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.892411  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:13 kubernetes-upgrade-171032 kubelet[6031]: E1031 17:15:13.117430    6031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.892805  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:13 kubernetes-upgrade-171032 kubelet[6044]: E1031 17:15:13.860525    6044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.893169  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:14 kubernetes-upgrade-171032 kubelet[6055]: E1031 17:15:14.616489    6055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.893543  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:15 kubernetes-upgrade-171032 kubelet[6066]: E1031 17:15:15.358075    6066 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.893917  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:16 kubernetes-upgrade-171032 kubelet[6077]: E1031 17:15:16.110959    6077 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.894282  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:16 kubernetes-upgrade-171032 kubelet[6088]: E1031 17:15:16.862250    6088 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.894647  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:17 kubernetes-upgrade-171032 kubelet[6099]: E1031 17:15:17.607243    6099 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.895018  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:18 kubernetes-upgrade-171032 kubelet[6111]: E1031 17:15:18.357406    6111 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.895571  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:19 kubernetes-upgrade-171032 kubelet[6124]: E1031 17:15:19.112583    6124 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.896096  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:19 kubernetes-upgrade-171032 kubelet[6273]: E1031 17:15:19.860390    6273 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.896520  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:20 kubernetes-upgrade-171032 kubelet[6284]: E1031 17:15:20.609113    6284 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.896927  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:21 kubernetes-upgrade-171032 kubelet[6295]: E1031 17:15:21.358879    6295 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.897398  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:22 kubernetes-upgrade-171032 kubelet[6307]: E1031 17:15:22.107091    6307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.897911  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:22 kubernetes-upgrade-171032 kubelet[6317]: E1031 17:15:22.857331    6317 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.898264  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:23 kubernetes-upgrade-171032 kubelet[6327]: E1031 17:15:23.607677    6327 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.898618  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:24 kubernetes-upgrade-171032 kubelet[6338]: E1031 17:15:24.366315    6338 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.899080  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:25 kubernetes-upgrade-171032 kubelet[6349]: E1031 17:15:25.125716    6349 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.899447  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:25 kubernetes-upgrade-171032 kubelet[6360]: E1031 17:15:25.863919    6360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.899804  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:26 kubernetes-upgrade-171032 kubelet[6372]: E1031 17:15:26.607312    6372 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.900202  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:27 kubernetes-upgrade-171032 kubelet[6384]: E1031 17:15:27.357819    6384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.900552  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:28 kubernetes-upgrade-171032 kubelet[6395]: E1031 17:15:28.106820    6395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.900914  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:28 kubernetes-upgrade-171032 kubelet[6407]: E1031 17:15:28.855886    6407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:29.901261  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:29 kubernetes-upgrade-171032 kubelet[6420]: E1031 17:15:29.607357    6420 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:15:29.901378  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:15:29.901395  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:15:29.919151  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:15:29.919190  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:15:29.977920  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:15:29.977942  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:15:29.977955  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:15:30.014713  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:15:30.014753  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:15:30.043075  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:15:30.043100  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:15:30.043208  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:15:30.043226  190637 out.go:239]   Oct 31 17:15:26 kubernetes-upgrade-171032 kubelet[6372]: E1031 17:15:26.607312    6372 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:30.043235  190637 out.go:239]   Oct 31 17:15:27 kubernetes-upgrade-171032 kubelet[6384]: E1031 17:15:27.357819    6384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:30.043247  190637 out.go:239]   Oct 31 17:15:28 kubernetes-upgrade-171032 kubelet[6395]: E1031 17:15:28.106820    6395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:30.043254  190637 out.go:239]   Oct 31 17:15:28 kubernetes-upgrade-171032 kubelet[6407]: E1031 17:15:28.855886    6407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:30.043263  190637 out.go:239]   Oct 31 17:15:29 kubernetes-upgrade-171032 kubelet[6420]: E1031 17:15:29.607357    6420 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:15:30.043267  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:15:30.043275  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:15:40.044169  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:15:40.158802  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:15:40.158880  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:15:40.183583  190637 cri.go:87] found id: ""
	I1031 17:15:40.183615  190637 logs.go:274] 0 containers: []
	W1031 17:15:40.183622  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:15:40.183628  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:15:40.183690  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:15:40.208726  190637 cri.go:87] found id: ""
	I1031 17:15:40.208774  190637 logs.go:274] 0 containers: []
	W1031 17:15:40.208782  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:15:40.208788  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:15:40.208838  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:15:40.233664  190637 cri.go:87] found id: ""
	I1031 17:15:40.233691  190637 logs.go:274] 0 containers: []
	W1031 17:15:40.233698  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:15:40.233704  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:15:40.233757  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:15:40.258979  190637 cri.go:87] found id: ""
	I1031 17:15:40.259008  190637 logs.go:274] 0 containers: []
	W1031 17:15:40.259014  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:15:40.259020  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:15:40.259069  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:15:40.282772  190637 cri.go:87] found id: ""
	I1031 17:15:40.282797  190637 logs.go:274] 0 containers: []
	W1031 17:15:40.282804  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:15:40.282812  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:15:40.282872  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:15:40.307653  190637 cri.go:87] found id: ""
	I1031 17:15:40.307681  190637 logs.go:274] 0 containers: []
	W1031 17:15:40.307687  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:15:40.307693  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:15:40.307742  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:15:40.332132  190637 cri.go:87] found id: ""
	I1031 17:15:40.332162  190637 logs.go:274] 0 containers: []
	W1031 17:15:40.332169  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:15:40.332176  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:15:40.332223  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:15:40.356646  190637 cri.go:87] found id: ""
	I1031 17:15:40.356677  190637 logs.go:274] 0 containers: []
	W1031 17:15:40.356686  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:15:40.356699  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:15:40.356713  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:15:40.374551  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:50 kubernetes-upgrade-171032 kubelet[5421]: E1031 17:14:50.608344    5421 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.374917  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:51 kubernetes-upgrade-171032 kubelet[5432]: E1031 17:14:51.366721    5432 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.375264  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:52 kubernetes-upgrade-171032 kubelet[5444]: E1031 17:14:52.113463    5444 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.375613  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:52 kubernetes-upgrade-171032 kubelet[5455]: E1031 17:14:52.857207    5455 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.375956  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:53 kubernetes-upgrade-171032 kubelet[5466]: E1031 17:14:53.606136    5466 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.376382  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:54 kubernetes-upgrade-171032 kubelet[5477]: E1031 17:14:54.357248    5477 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.376735  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:55 kubernetes-upgrade-171032 kubelet[5488]: E1031 17:14:55.109823    5488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.377087  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:55 kubernetes-upgrade-171032 kubelet[5499]: E1031 17:14:55.858792    5499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.377443  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:56 kubernetes-upgrade-171032 kubelet[5510]: E1031 17:14:56.607649    5510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.377801  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:57 kubernetes-upgrade-171032 kubelet[5521]: E1031 17:14:57.356667    5521 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.378148  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:58 kubernetes-upgrade-171032 kubelet[5532]: E1031 17:14:58.108121    5532 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.378495  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:58 kubernetes-upgrade-171032 kubelet[5680]: E1031 17:14:58.858501    5680 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.378832  190637 logs.go:138] Found kubelet problem: Oct 31 17:14:59 kubernetes-upgrade-171032 kubelet[5691]: E1031 17:14:59.609000    5691 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.379174  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:00 kubernetes-upgrade-171032 kubelet[5702]: E1031 17:15:00.357407    5702 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.379524  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:01 kubernetes-upgrade-171032 kubelet[5713]: E1031 17:15:01.112374    5713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.379867  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:01 kubernetes-upgrade-171032 kubelet[5725]: E1031 17:15:01.861185    5725 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.380238  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:02 kubernetes-upgrade-171032 kubelet[5737]: E1031 17:15:02.611197    5737 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.380591  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:03 kubernetes-upgrade-171032 kubelet[5749]: E1031 17:15:03.359242    5749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.380939  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:04 kubernetes-upgrade-171032 kubelet[5760]: E1031 17:15:04.109453    5760 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.381284  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:04 kubernetes-upgrade-171032 kubelet[5771]: E1031 17:15:04.864773    5771 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.381627  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:05 kubernetes-upgrade-171032 kubelet[5782]: E1031 17:15:05.622804    5782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.381966  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:06 kubernetes-upgrade-171032 kubelet[5794]: E1031 17:15:06.358628    5794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.382315  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5804]: E1031 17:15:07.107840    5804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.382659  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5815]: E1031 17:15:07.857348    5815 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.383017  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:08 kubernetes-upgrade-171032 kubelet[5828]: E1031 17:15:08.608144    5828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.383367  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:09 kubernetes-upgrade-171032 kubelet[5976]: E1031 17:15:09.357321    5976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.383715  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:10 kubernetes-upgrade-171032 kubelet[5987]: E1031 17:15:10.107811    5987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.384181  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:10 kubernetes-upgrade-171032 kubelet[5998]: E1031 17:15:10.858421    5998 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.384573  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:11 kubernetes-upgrade-171032 kubelet[6008]: E1031 17:15:11.622213    6008 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.384927  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:12 kubernetes-upgrade-171032 kubelet[6020]: E1031 17:15:12.363455    6020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.385334  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:13 kubernetes-upgrade-171032 kubelet[6031]: E1031 17:15:13.117430    6031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.385720  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:13 kubernetes-upgrade-171032 kubelet[6044]: E1031 17:15:13.860525    6044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.386075  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:14 kubernetes-upgrade-171032 kubelet[6055]: E1031 17:15:14.616489    6055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.386436  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:15 kubernetes-upgrade-171032 kubelet[6066]: E1031 17:15:15.358075    6066 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.386813  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:16 kubernetes-upgrade-171032 kubelet[6077]: E1031 17:15:16.110959    6077 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.387213  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:16 kubernetes-upgrade-171032 kubelet[6088]: E1031 17:15:16.862250    6088 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.387589  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:17 kubernetes-upgrade-171032 kubelet[6099]: E1031 17:15:17.607243    6099 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.387964  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:18 kubernetes-upgrade-171032 kubelet[6111]: E1031 17:15:18.357406    6111 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.388444  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:19 kubernetes-upgrade-171032 kubelet[6124]: E1031 17:15:19.112583    6124 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.388793  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:19 kubernetes-upgrade-171032 kubelet[6273]: E1031 17:15:19.860390    6273 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.389137  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:20 kubernetes-upgrade-171032 kubelet[6284]: E1031 17:15:20.609113    6284 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.389490  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:21 kubernetes-upgrade-171032 kubelet[6295]: E1031 17:15:21.358879    6295 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.389827  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:22 kubernetes-upgrade-171032 kubelet[6307]: E1031 17:15:22.107091    6307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.390177  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:22 kubernetes-upgrade-171032 kubelet[6317]: E1031 17:15:22.857331    6317 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.390530  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:23 kubernetes-upgrade-171032 kubelet[6327]: E1031 17:15:23.607677    6327 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.390877  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:24 kubernetes-upgrade-171032 kubelet[6338]: E1031 17:15:24.366315    6338 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.391233  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:25 kubernetes-upgrade-171032 kubelet[6349]: E1031 17:15:25.125716    6349 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.391581  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:25 kubernetes-upgrade-171032 kubelet[6360]: E1031 17:15:25.863919    6360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.391931  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:26 kubernetes-upgrade-171032 kubelet[6372]: E1031 17:15:26.607312    6372 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.392301  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:27 kubernetes-upgrade-171032 kubelet[6384]: E1031 17:15:27.357819    6384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.392656  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:28 kubernetes-upgrade-171032 kubelet[6395]: E1031 17:15:28.106820    6395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.393007  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:28 kubernetes-upgrade-171032 kubelet[6407]: E1031 17:15:28.855886    6407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.393357  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:29 kubernetes-upgrade-171032 kubelet[6420]: E1031 17:15:29.607357    6420 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.393701  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:30 kubernetes-upgrade-171032 kubelet[6565]: E1031 17:15:30.356863    6565 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.394042  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:31 kubernetes-upgrade-171032 kubelet[6576]: E1031 17:15:31.107809    6576 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.394409  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:31 kubernetes-upgrade-171032 kubelet[6587]: E1031 17:15:31.856941    6587 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.394763  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:32 kubernetes-upgrade-171032 kubelet[6598]: E1031 17:15:32.608356    6598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.395128  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:33 kubernetes-upgrade-171032 kubelet[6609]: E1031 17:15:33.355994    6609 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.395504  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:34 kubernetes-upgrade-171032 kubelet[6620]: E1031 17:15:34.107190    6620 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.395885  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:34 kubernetes-upgrade-171032 kubelet[6631]: E1031 17:15:34.857246    6631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.396291  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:35 kubernetes-upgrade-171032 kubelet[6643]: E1031 17:15:35.608333    6643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.396671  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:36 kubernetes-upgrade-171032 kubelet[6654]: E1031 17:15:36.357567    6654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.397045  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:37 kubernetes-upgrade-171032 kubelet[6665]: E1031 17:15:37.108447    6665 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.397423  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:37 kubernetes-upgrade-171032 kubelet[6677]: E1031 17:15:37.856194    6677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.397797  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:38 kubernetes-upgrade-171032 kubelet[6688]: E1031 17:15:38.605572    6688 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.398185  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:39 kubernetes-upgrade-171032 kubelet[6699]: E1031 17:15:39.359473    6699 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.398564  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:40 kubernetes-upgrade-171032 kubelet[6713]: E1031 17:15:40.105943    6713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:15:40.398712  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:15:40.398730  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:15:40.416274  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:15:40.416316  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:15:40.472785  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:15:40.472812  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:15:40.472835  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:15:40.509920  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:15:40.509959  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:15:40.537424  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:15:40.537453  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:15:40.537606  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:15:40.537621  190637 out.go:239]   Oct 31 17:15:37 kubernetes-upgrade-171032 kubelet[6665]: E1031 17:15:37.108447    6665 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.537628  190637 out.go:239]   Oct 31 17:15:37 kubernetes-upgrade-171032 kubelet[6677]: E1031 17:15:37.856194    6677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.537635  190637 out.go:239]   Oct 31 17:15:38 kubernetes-upgrade-171032 kubelet[6688]: E1031 17:15:38.605572    6688 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.537643  190637 out.go:239]   Oct 31 17:15:39 kubernetes-upgrade-171032 kubelet[6699]: E1031 17:15:39.359473    6699 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:40.537656  190637 out.go:239]   Oct 31 17:15:40 kubernetes-upgrade-171032 kubelet[6713]: E1031 17:15:40.105943    6713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:15:40.537662  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:15:40.537675  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:15:50.539109  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:15:50.659202  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:15:50.659278  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:15:50.685378  190637 cri.go:87] found id: ""
	I1031 17:15:50.685405  190637 logs.go:274] 0 containers: []
	W1031 17:15:50.685410  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:15:50.685416  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:15:50.685460  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:15:50.712375  190637 cri.go:87] found id: ""
	I1031 17:15:50.712403  190637 logs.go:274] 0 containers: []
	W1031 17:15:50.712410  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:15:50.712417  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:15:50.712462  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:15:50.736437  190637 cri.go:87] found id: ""
	I1031 17:15:50.736462  190637 logs.go:274] 0 containers: []
	W1031 17:15:50.736468  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:15:50.736473  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:15:50.736518  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:15:50.760675  190637 cri.go:87] found id: ""
	I1031 17:15:50.760706  190637 logs.go:274] 0 containers: []
	W1031 17:15:50.760713  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:15:50.760719  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:15:50.760766  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:15:50.786904  190637 cri.go:87] found id: ""
	I1031 17:15:50.786927  190637 logs.go:274] 0 containers: []
	W1031 17:15:50.786934  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:15:50.786940  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:15:50.786991  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:15:50.811497  190637 cri.go:87] found id: ""
	I1031 17:15:50.811520  190637 logs.go:274] 0 containers: []
	W1031 17:15:50.811525  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:15:50.811531  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:15:50.811575  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:15:50.837187  190637 cri.go:87] found id: ""
	I1031 17:15:50.837215  190637 logs.go:274] 0 containers: []
	W1031 17:15:50.837222  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:15:50.837228  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:15:50.837276  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:15:50.861517  190637 cri.go:87] found id: ""
	I1031 17:15:50.861549  190637 logs.go:274] 0 containers: []
	W1031 17:15:50.861558  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:15:50.861567  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:15:50.861578  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:15:50.897201  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:15:50.897234  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:15:50.925346  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:15:50.925377  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:15:50.942359  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:01 kubernetes-upgrade-171032 kubelet[5713]: E1031 17:15:01.112374    5713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.942722  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:01 kubernetes-upgrade-171032 kubelet[5725]: E1031 17:15:01.861185    5725 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.943089  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:02 kubernetes-upgrade-171032 kubelet[5737]: E1031 17:15:02.611197    5737 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.943441  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:03 kubernetes-upgrade-171032 kubelet[5749]: E1031 17:15:03.359242    5749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.943787  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:04 kubernetes-upgrade-171032 kubelet[5760]: E1031 17:15:04.109453    5760 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.944169  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:04 kubernetes-upgrade-171032 kubelet[5771]: E1031 17:15:04.864773    5771 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.944520  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:05 kubernetes-upgrade-171032 kubelet[5782]: E1031 17:15:05.622804    5782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.944896  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:06 kubernetes-upgrade-171032 kubelet[5794]: E1031 17:15:06.358628    5794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.945380  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5804]: E1031 17:15:07.107840    5804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.945866  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:07 kubernetes-upgrade-171032 kubelet[5815]: E1031 17:15:07.857348    5815 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.946302  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:08 kubernetes-upgrade-171032 kubelet[5828]: E1031 17:15:08.608144    5828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.946686  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:09 kubernetes-upgrade-171032 kubelet[5976]: E1031 17:15:09.357321    5976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.947068  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:10 kubernetes-upgrade-171032 kubelet[5987]: E1031 17:15:10.107811    5987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.947416  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:10 kubernetes-upgrade-171032 kubelet[5998]: E1031 17:15:10.858421    5998 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.947950  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:11 kubernetes-upgrade-171032 kubelet[6008]: E1031 17:15:11.622213    6008 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.948612  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:12 kubernetes-upgrade-171032 kubelet[6020]: E1031 17:15:12.363455    6020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.949021  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:13 kubernetes-upgrade-171032 kubelet[6031]: E1031 17:15:13.117430    6031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.949383  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:13 kubernetes-upgrade-171032 kubelet[6044]: E1031 17:15:13.860525    6044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.949745  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:14 kubernetes-upgrade-171032 kubelet[6055]: E1031 17:15:14.616489    6055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.950099  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:15 kubernetes-upgrade-171032 kubelet[6066]: E1031 17:15:15.358075    6066 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.950474  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:16 kubernetes-upgrade-171032 kubelet[6077]: E1031 17:15:16.110959    6077 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.950844  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:16 kubernetes-upgrade-171032 kubelet[6088]: E1031 17:15:16.862250    6088 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.951192  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:17 kubernetes-upgrade-171032 kubelet[6099]: E1031 17:15:17.607243    6099 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.951536  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:18 kubernetes-upgrade-171032 kubelet[6111]: E1031 17:15:18.357406    6111 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.951878  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:19 kubernetes-upgrade-171032 kubelet[6124]: E1031 17:15:19.112583    6124 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.952292  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:19 kubernetes-upgrade-171032 kubelet[6273]: E1031 17:15:19.860390    6273 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.952648  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:20 kubernetes-upgrade-171032 kubelet[6284]: E1031 17:15:20.609113    6284 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.953045  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:21 kubernetes-upgrade-171032 kubelet[6295]: E1031 17:15:21.358879    6295 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.953477  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:22 kubernetes-upgrade-171032 kubelet[6307]: E1031 17:15:22.107091    6307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.953829  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:22 kubernetes-upgrade-171032 kubelet[6317]: E1031 17:15:22.857331    6317 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.954176  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:23 kubernetes-upgrade-171032 kubelet[6327]: E1031 17:15:23.607677    6327 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.954527  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:24 kubernetes-upgrade-171032 kubelet[6338]: E1031 17:15:24.366315    6338 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.954870  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:25 kubernetes-upgrade-171032 kubelet[6349]: E1031 17:15:25.125716    6349 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.955225  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:25 kubernetes-upgrade-171032 kubelet[6360]: E1031 17:15:25.863919    6360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.955575  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:26 kubernetes-upgrade-171032 kubelet[6372]: E1031 17:15:26.607312    6372 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.955980  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:27 kubernetes-upgrade-171032 kubelet[6384]: E1031 17:15:27.357819    6384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.956440  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:28 kubernetes-upgrade-171032 kubelet[6395]: E1031 17:15:28.106820    6395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.956991  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:28 kubernetes-upgrade-171032 kubelet[6407]: E1031 17:15:28.855886    6407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.957570  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:29 kubernetes-upgrade-171032 kubelet[6420]: E1031 17:15:29.607357    6420 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.958172  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:30 kubernetes-upgrade-171032 kubelet[6565]: E1031 17:15:30.356863    6565 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.958625  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:31 kubernetes-upgrade-171032 kubelet[6576]: E1031 17:15:31.107809    6576 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.958976  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:31 kubernetes-upgrade-171032 kubelet[6587]: E1031 17:15:31.856941    6587 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.959321  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:32 kubernetes-upgrade-171032 kubelet[6598]: E1031 17:15:32.608356    6598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.959678  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:33 kubernetes-upgrade-171032 kubelet[6609]: E1031 17:15:33.355994    6609 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.960022  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:34 kubernetes-upgrade-171032 kubelet[6620]: E1031 17:15:34.107190    6620 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.960404  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:34 kubernetes-upgrade-171032 kubelet[6631]: E1031 17:15:34.857246    6631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.960746  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:35 kubernetes-upgrade-171032 kubelet[6643]: E1031 17:15:35.608333    6643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.961095  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:36 kubernetes-upgrade-171032 kubelet[6654]: E1031 17:15:36.357567    6654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.961450  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:37 kubernetes-upgrade-171032 kubelet[6665]: E1031 17:15:37.108447    6665 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.961795  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:37 kubernetes-upgrade-171032 kubelet[6677]: E1031 17:15:37.856194    6677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.962145  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:38 kubernetes-upgrade-171032 kubelet[6688]: E1031 17:15:38.605572    6688 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.962493  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:39 kubernetes-upgrade-171032 kubelet[6699]: E1031 17:15:39.359473    6699 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.962849  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:40 kubernetes-upgrade-171032 kubelet[6713]: E1031 17:15:40.105943    6713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.963234  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:40 kubernetes-upgrade-171032 kubelet[6862]: E1031 17:15:40.857154    6862 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.963591  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:41 kubernetes-upgrade-171032 kubelet[6872]: E1031 17:15:41.607382    6872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.963942  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:42 kubernetes-upgrade-171032 kubelet[6883]: E1031 17:15:42.357594    6883 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.964316  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:43 kubernetes-upgrade-171032 kubelet[6895]: E1031 17:15:43.109416    6895 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.964670  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:43 kubernetes-upgrade-171032 kubelet[6905]: E1031 17:15:43.857214    6905 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.965018  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:44 kubernetes-upgrade-171032 kubelet[6916]: E1031 17:15:44.614966    6916 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.965407  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:45 kubernetes-upgrade-171032 kubelet[6927]: E1031 17:15:45.374185    6927 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.965756  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:46 kubernetes-upgrade-171032 kubelet[6937]: E1031 17:15:46.108936    6937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.966099  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:46 kubernetes-upgrade-171032 kubelet[6947]: E1031 17:15:46.860100    6947 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.966460  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:47 kubernetes-upgrade-171032 kubelet[6958]: E1031 17:15:47.608695    6958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.966808  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:48 kubernetes-upgrade-171032 kubelet[6970]: E1031 17:15:48.357505    6970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.967148  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:49 kubernetes-upgrade-171032 kubelet[6980]: E1031 17:15:49.111737    6980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.967495  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:49 kubernetes-upgrade-171032 kubelet[6991]: E1031 17:15:49.860784    6991 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:50.967837  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:50 kubernetes-upgrade-171032 kubelet[7003]: E1031 17:15:50.609888    7003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:15:50.967970  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:15:50.967992  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:15:50.985375  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:15:50.985405  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:15:51.042191  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:15:51.042222  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:15:51.042236  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:15:51.042353  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:15:51.042372  190637 out.go:239]   Oct 31 17:15:47 kubernetes-upgrade-171032 kubelet[6958]: E1031 17:15:47.608695    6958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:51.042381  190637 out.go:239]   Oct 31 17:15:48 kubernetes-upgrade-171032 kubelet[6970]: E1031 17:15:48.357505    6970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:51.042389  190637 out.go:239]   Oct 31 17:15:49 kubernetes-upgrade-171032 kubelet[6980]: E1031 17:15:49.111737    6980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:51.042400  190637 out.go:239]   Oct 31 17:15:49 kubernetes-upgrade-171032 kubelet[6991]: E1031 17:15:49.860784    6991 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:15:51.042406  190637 out.go:239]   Oct 31 17:15:50 kubernetes-upgrade-171032 kubelet[7003]: E1031 17:15:50.609888    7003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:15:51.042413  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:15:51.042418  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
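The crash-loop above has a single cause: the `--cni-conf-dir` kubelet flag was removed along with dockershim in Kubernetes v1.24, so the v1.25.3 kubelet rejects it on every systemd restart. A minimal sketch of the failure and fix, using a hypothetical stand-in for the drop-in file (not the file minikube actually writes):

```shell
# Stand-in for a stale kubelet systemd drop-in that still passes the
# dockershim-era flag; kubelet >= 1.24 fails to parse it, producing the
# "unknown flag: --cni-conf-dir" loop seen in the log above.
conf=$(mktemp)
cat > "$conf" <<'EOF'
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --cni-conf-dir=/etc/cni/net.d
EOF

# The fix is to strip the removed flag before restarting kubelet.
sed -i 's/ --cni-conf-dir=[^ ]*//' "$conf"

if grep -q -- '--cni-conf-dir' "$conf"; then
  result="flag still present"
else
  result="flag removed"
fi
echo "$result"
rm -f "$conf"
```

Because kubelet never starts, the apiserver is never brought up either, which is why the subsequent `kubectl describe nodes` and `crictl ps` probes in this log find nothing listening on localhost:8443 and no containers.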
	I1031 17:16:01.043121  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:16:01.158572  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:16:01.158650  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:16:01.184635  190637 cri.go:87] found id: ""
	I1031 17:16:01.184658  190637 logs.go:274] 0 containers: []
	W1031 17:16:01.184664  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:16:01.184670  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:16:01.184711  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:16:01.209372  190637 cri.go:87] found id: ""
	I1031 17:16:01.209399  190637 logs.go:274] 0 containers: []
	W1031 17:16:01.209407  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:16:01.209414  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:16:01.209472  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:16:01.234310  190637 cri.go:87] found id: ""
	I1031 17:16:01.234337  190637 logs.go:274] 0 containers: []
	W1031 17:16:01.234345  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:16:01.234352  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:16:01.234406  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:16:01.258609  190637 cri.go:87] found id: ""
	I1031 17:16:01.258633  190637 logs.go:274] 0 containers: []
	W1031 17:16:01.258639  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:16:01.258646  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:16:01.258692  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:16:01.283255  190637 cri.go:87] found id: ""
	I1031 17:16:01.283280  190637 logs.go:274] 0 containers: []
	W1031 17:16:01.283290  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:16:01.283296  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:16:01.283356  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:16:01.308381  190637 cri.go:87] found id: ""
	I1031 17:16:01.308405  190637 logs.go:274] 0 containers: []
	W1031 17:16:01.308411  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:16:01.308417  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:16:01.308459  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:16:01.334547  190637 cri.go:87] found id: ""
	I1031 17:16:01.334585  190637 logs.go:274] 0 containers: []
	W1031 17:16:01.334594  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:16:01.334605  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:16:01.334661  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:16:01.359769  190637 cri.go:87] found id: ""
	I1031 17:16:01.359800  190637 logs.go:274] 0 containers: []
	W1031 17:16:01.359810  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:16:01.359821  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:16:01.359837  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:16:01.376712  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:11 kubernetes-upgrade-171032 kubelet[6008]: E1031 17:15:11.622213    6008 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.377326  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:12 kubernetes-upgrade-171032 kubelet[6020]: E1031 17:15:12.363455    6020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.377915  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:13 kubernetes-upgrade-171032 kubelet[6031]: E1031 17:15:13.117430    6031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.378495  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:13 kubernetes-upgrade-171032 kubelet[6044]: E1031 17:15:13.860525    6044 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.379077  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:14 kubernetes-upgrade-171032 kubelet[6055]: E1031 17:15:14.616489    6055 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.379641  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:15 kubernetes-upgrade-171032 kubelet[6066]: E1031 17:15:15.358075    6066 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.380230  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:16 kubernetes-upgrade-171032 kubelet[6077]: E1031 17:15:16.110959    6077 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.380767  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:16 kubernetes-upgrade-171032 kubelet[6088]: E1031 17:15:16.862250    6088 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.381155  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:17 kubernetes-upgrade-171032 kubelet[6099]: E1031 17:15:17.607243    6099 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.381541  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:18 kubernetes-upgrade-171032 kubelet[6111]: E1031 17:15:18.357406    6111 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.381949  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:19 kubernetes-upgrade-171032 kubelet[6124]: E1031 17:15:19.112583    6124 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.382414  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:19 kubernetes-upgrade-171032 kubelet[6273]: E1031 17:15:19.860390    6273 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.382961  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:20 kubernetes-upgrade-171032 kubelet[6284]: E1031 17:15:20.609113    6284 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.383343  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:21 kubernetes-upgrade-171032 kubelet[6295]: E1031 17:15:21.358879    6295 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.383806  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:22 kubernetes-upgrade-171032 kubelet[6307]: E1031 17:15:22.107091    6307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.384430  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:22 kubernetes-upgrade-171032 kubelet[6317]: E1031 17:15:22.857331    6317 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.385091  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:23 kubernetes-upgrade-171032 kubelet[6327]: E1031 17:15:23.607677    6327 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.385533  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:24 kubernetes-upgrade-171032 kubelet[6338]: E1031 17:15:24.366315    6338 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.385982  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:25 kubernetes-upgrade-171032 kubelet[6349]: E1031 17:15:25.125716    6349 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.386378  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:25 kubernetes-upgrade-171032 kubelet[6360]: E1031 17:15:25.863919    6360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.386945  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:26 kubernetes-upgrade-171032 kubelet[6372]: E1031 17:15:26.607312    6372 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.387342  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:27 kubernetes-upgrade-171032 kubelet[6384]: E1031 17:15:27.357819    6384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.387741  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:28 kubernetes-upgrade-171032 kubelet[6395]: E1031 17:15:28.106820    6395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.388129  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:28 kubernetes-upgrade-171032 kubelet[6407]: E1031 17:15:28.855886    6407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.388495  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:29 kubernetes-upgrade-171032 kubelet[6420]: E1031 17:15:29.607357    6420 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.388864  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:30 kubernetes-upgrade-171032 kubelet[6565]: E1031 17:15:30.356863    6565 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.389214  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:31 kubernetes-upgrade-171032 kubelet[6576]: E1031 17:15:31.107809    6576 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.389559  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:31 kubernetes-upgrade-171032 kubelet[6587]: E1031 17:15:31.856941    6587 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.389939  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:32 kubernetes-upgrade-171032 kubelet[6598]: E1031 17:15:32.608356    6598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.390299  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:33 kubernetes-upgrade-171032 kubelet[6609]: E1031 17:15:33.355994    6609 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.390647  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:34 kubernetes-upgrade-171032 kubelet[6620]: E1031 17:15:34.107190    6620 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.390996  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:34 kubernetes-upgrade-171032 kubelet[6631]: E1031 17:15:34.857246    6631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.391370  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:35 kubernetes-upgrade-171032 kubelet[6643]: E1031 17:15:35.608333    6643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.391768  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:36 kubernetes-upgrade-171032 kubelet[6654]: E1031 17:15:36.357567    6654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.392220  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:37 kubernetes-upgrade-171032 kubelet[6665]: E1031 17:15:37.108447    6665 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.392642  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:37 kubernetes-upgrade-171032 kubelet[6677]: E1031 17:15:37.856194    6677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.393090  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:38 kubernetes-upgrade-171032 kubelet[6688]: E1031 17:15:38.605572    6688 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.393510  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:39 kubernetes-upgrade-171032 kubelet[6699]: E1031 17:15:39.359473    6699 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.393888  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:40 kubernetes-upgrade-171032 kubelet[6713]: E1031 17:15:40.105943    6713 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.394284  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:40 kubernetes-upgrade-171032 kubelet[6862]: E1031 17:15:40.857154    6862 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.394654  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:41 kubernetes-upgrade-171032 kubelet[6872]: E1031 17:15:41.607382    6872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.395031  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:42 kubernetes-upgrade-171032 kubelet[6883]: E1031 17:15:42.357594    6883 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.395411  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:43 kubernetes-upgrade-171032 kubelet[6895]: E1031 17:15:43.109416    6895 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.395790  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:43 kubernetes-upgrade-171032 kubelet[6905]: E1031 17:15:43.857214    6905 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.396208  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:44 kubernetes-upgrade-171032 kubelet[6916]: E1031 17:15:44.614966    6916 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.396621  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:45 kubernetes-upgrade-171032 kubelet[6927]: E1031 17:15:45.374185    6927 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.397006  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:46 kubernetes-upgrade-171032 kubelet[6937]: E1031 17:15:46.108936    6937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.397381  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:46 kubernetes-upgrade-171032 kubelet[6947]: E1031 17:15:46.860100    6947 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.397753  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:47 kubernetes-upgrade-171032 kubelet[6958]: E1031 17:15:47.608695    6958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.398128  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:48 kubernetes-upgrade-171032 kubelet[6970]: E1031 17:15:48.357505    6970 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.398509  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:49 kubernetes-upgrade-171032 kubelet[6980]: E1031 17:15:49.111737    6980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.398884  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:49 kubernetes-upgrade-171032 kubelet[6991]: E1031 17:15:49.860784    6991 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.399262  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:50 kubernetes-upgrade-171032 kubelet[7003]: E1031 17:15:50.609888    7003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.399636  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:51 kubernetes-upgrade-171032 kubelet[7151]: E1031 17:15:51.358899    7151 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.400010  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:52 kubernetes-upgrade-171032 kubelet[7162]: E1031 17:15:52.108432    7162 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.400395  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:52 kubernetes-upgrade-171032 kubelet[7173]: E1031 17:15:52.857712    7173 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.400772  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:53 kubernetes-upgrade-171032 kubelet[7185]: E1031 17:15:53.604681    7185 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.401152  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:54 kubernetes-upgrade-171032 kubelet[7196]: E1031 17:15:54.360229    7196 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.401610  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:55 kubernetes-upgrade-171032 kubelet[7207]: E1031 17:15:55.107781    7207 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.402000  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:55 kubernetes-upgrade-171032 kubelet[7218]: E1031 17:15:55.856993    7218 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.402383  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:56 kubernetes-upgrade-171032 kubelet[7229]: E1031 17:15:56.613075    7229 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.402762  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:57 kubernetes-upgrade-171032 kubelet[7239]: E1031 17:15:57.359261    7239 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.403149  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:58 kubernetes-upgrade-171032 kubelet[7250]: E1031 17:15:58.108178    7250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.403554  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:58 kubernetes-upgrade-171032 kubelet[7261]: E1031 17:15:58.855512    7261 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.403933  190637 logs.go:138] Found kubelet problem: Oct 31 17:15:59 kubernetes-upgrade-171032 kubelet[7273]: E1031 17:15:59.608918    7273 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.404310  190637 logs.go:138] Found kubelet problem: Oct 31 17:16:00 kubernetes-upgrade-171032 kubelet[7285]: E1031 17:16:00.357758    7285 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.404661  190637 logs.go:138] Found kubelet problem: Oct 31 17:16:01 kubernetes-upgrade-171032 kubelet[7298]: E1031 17:16:01.109550    7298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
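The nineteen identical crash-loops above share a single root cause: `--cni-conf-dir` was a dockershim-era kubelet flag that Kubernetes removed in v1.24, so a `kubeadm-flags.env` carried over from the old cluster makes every newer kubelet exit during flag parsing. A minimal POSIX-shell sketch of screening a flag string for the removed flags (the `flags` value below is a made-up stand-in, not the node's real file):

```shell
# Hypothetical flag string standing in for /var/lib/kubelet/kubeadm-flags.env;
# the three flags checked were removed from the kubelet in v1.24 (dockershim removal).
flags='--container-runtime=remote --cni-conf-dir=/etc/cni/net.d'
stale=''
for removed in --cni-conf-dir --cni-bin-dir --network-plugin; do
  case " $flags " in
    *" $removed"*) stale="$stale$removed " ;;   # collect each stale flag found
  esac
done
echo "stale flags: $stale"
```

Any non-empty result here predicts exactly the `unknown flag` failure seen in the journal lines above.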
	I1031 17:16:01.404780  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:16:01.404794  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:16:01.424410  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:16:01.424445  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:16:01.484424  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:16:01.484455  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:16:01.484466  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:16:01.523363  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:16:01.523396  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 17:16:01.553322  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:16:01.553352  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 17:16:01.553472  190637 out.go:239] X Problems detected in kubelet:
	W1031 17:16:01.553487  190637 out.go:239]   Oct 31 17:15:58 kubernetes-upgrade-171032 kubelet[7250]: E1031 17:15:58.108178    7250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.553495  190637 out.go:239]   Oct 31 17:15:58 kubernetes-upgrade-171032 kubelet[7261]: E1031 17:15:58.855512    7261 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.553503  190637 out.go:239]   Oct 31 17:15:59 kubernetes-upgrade-171032 kubelet[7273]: E1031 17:15:59.608918    7273 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.553514  190637 out.go:239]   Oct 31 17:16:00 kubernetes-upgrade-171032 kubelet[7285]: E1031 17:16:00.357758    7285 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:16:01.553529  190637 out.go:239]   Oct 31 17:16:01 kubernetes-upgrade-171032 kubelet[7298]: E1031 17:16:01.109550    7298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:16:01.553542  190637 out.go:309] Setting ErrFile to fd 2...
	I1031 17:16:01.553550  190637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:16:11.556460  190637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:16:11.567521  190637 kubeadm.go:631] restartCluster took 4m10.041521407s
	W1031 17:16:11.567664  190637 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I1031 17:16:11.567696  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1031 17:16:13.709150  190637 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.141428686s)
	I1031 17:16:13.709215  190637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:16:13.723610  190637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:16:13.733630  190637 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1031 17:16:13.733686  190637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:16:13.743379  190637 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
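The "config check failed" message above comes from a simple probe: a single `ls` over the expected kubeconfig files, where a nonzero exit (here, status 2) means at least one is missing and stale-config cleanup is skipped. A self-contained sketch of that check, using a temp directory rather than the node's real `/etc/kubernetes` paths:

```shell
# Miniature version of the config check: list expected files; any missing
# file makes ls fail, which is taken as "no stale configs to clean up".
dir=$(mktemp -d)
touch "$dir/admin.conf"          # kubelet.conf is deliberately absent
if ls "$dir/admin.conf" "$dir/kubelet.conf" >/dev/null 2>&1; then
  result='all configs present'
else
  result='config check failed, skipping stale config cleanup'
fi
echo "$result"
rm -rf "$dir"
```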
	I1031 17:16:13.743430  190637 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1031 17:16:13.790613  190637 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1031 17:16:13.790688  190637 kubeadm.go:317] [preflight] Running pre-flight checks
	I1031 17:16:13.821183  190637 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1031 17:16:13.821272  190637 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1031 17:16:13.821321  190637 kubeadm.go:317] OS: Linux
	I1031 17:16:13.821386  190637 kubeadm.go:317] CGROUPS_CPU: enabled
	I1031 17:16:13.821493  190637 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1031 17:16:13.821589  190637 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1031 17:16:13.821666  190637 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1031 17:16:13.821732  190637 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1031 17:16:13.821822  190637 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1031 17:16:13.821875  190637 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1031 17:16:13.821917  190637 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1031 17:16:13.822044  190637 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1031 17:16:13.889773  190637 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:16:13.889934  190637 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:16:13.890088  190637 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:16:14.018753  190637 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:16:14.020712  190637 out.go:204]   - Generating certificates and keys ...
	I1031 17:16:14.020873  190637 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1031 17:16:14.020988  190637 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1031 17:16:14.021104  190637 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 17:16:14.021188  190637 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1031 17:16:14.021290  190637 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 17:16:14.021373  190637 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1031 17:16:14.021456  190637 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1031 17:16:14.021540  190637 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1031 17:16:14.021648  190637 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 17:16:14.021748  190637 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 17:16:14.021839  190637 kubeadm.go:317] [certs] Using the existing "sa" key
	I1031 17:16:14.021922  190637 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:16:14.084943  190637 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:16:14.183259  190637 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:16:14.320739  190637 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:16:14.500911  190637 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:16:14.513207  190637 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:16:14.514069  190637 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:16:14.514147  190637 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1031 17:16:14.607954  190637 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:16:14.609645  190637 out.go:204]   - Booting up control plane ...
	I1031 17:16:14.609766  190637 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:16:14.610668  190637 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:16:14.611660  190637 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:16:14.612690  190637 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:16:14.615783  190637 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:16:54.616640  190637 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1031 17:16:54.617013  190637 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1031 17:16:54.617215  190637 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1031 17:16:59.618230  190637 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1031 17:16:59.618410  190637 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1031 17:17:09.619423  190637 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1031 17:17:09.619634  190637 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1031 17:17:29.620475  190637 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1031 17:17:29.620705  190637 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1031 17:18:09.621449  190637 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1031 17:18:09.621747  190637 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1031 17:18:09.621776  190637 kubeadm.go:317] 
	I1031 17:18:09.621815  190637 kubeadm.go:317] Unfortunately, an error has occurred:
	I1031 17:18:09.621883  190637 kubeadm.go:317] 	timed out waiting for the condition
	I1031 17:18:09.621912  190637 kubeadm.go:317] 
	I1031 17:18:09.621969  190637 kubeadm.go:317] This error is likely caused by:
	I1031 17:18:09.622021  190637 kubeadm.go:317] 	- The kubelet is not running
	I1031 17:18:09.622125  190637 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1031 17:18:09.622143  190637 kubeadm.go:317] 
	I1031 17:18:09.622261  190637 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1031 17:18:09.622314  190637 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1031 17:18:09.622356  190637 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1031 17:18:09.622367  190637 kubeadm.go:317] 
	I1031 17:18:09.622499  190637 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1031 17:18:09.622602  190637 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1031 17:18:09.622734  190637 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1031 17:18:09.622891  190637 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I1031 17:18:09.623023  190637 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1031 17:18:09.623099  190637 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
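The troubleshooting commands kubeadm lists above can be collected into one small script. Tool names and the containerd socket path are the ones from the log; the wrapper skips any tool that is not installed, so the sketch degrades gracefully when tried off-node:

```shell
# Run each diagnostic from the kubeadm error text, skipping tools that are
# not installed on this machine; failures of tools that do exist are tolerated.
run_if_present() {
  if command -v "${1%% *}" >/dev/null 2>&1; then
    eval "$1" || true                       # tool exists; show output even on failure
  else
    echo "skipped (not installed): ${1%% *}"
  fi
}
run_if_present 'systemctl status kubelet --no-pager'
run_if_present 'journalctl -xeu kubelet --no-pager'
run_if_present 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a'
```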
	I1031 17:18:09.625050  190637 kubeadm.go:317] W1031 17:16:13.785491    8604 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1031 17:18:09.625296  190637 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1031 17:18:09.625417  190637 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:18:09.625517  190637 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1031 17:18:09.625615  190637 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1031 17:18:09.625853  190637 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1031 17:16:13.785491    8604 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
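Before the retry below, note that the root-cause flag can be extracted mechanically from any of the repeated kubelet journal lines. The sketch uses one line copied from earlier in this log and plain POSIX parameter expansion, with no external tools assumed:

```shell
# One kubelet journal line copied verbatim from the log above; pull out the
# flag name that follows "unknown flag: " using parameter expansion.
line='E1031 17:16:01.109550    7298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"'
flag=${line##*unknown flag: }   # drop everything through the marker
flag=${flag%\"}                 # drop the trailing quote
echo "$flag"
```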
	I1031 17:18:09.625896  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1031 17:18:11.507839  190637 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.881919942s)
	I1031 17:18:11.507902  190637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:18:11.518438  190637 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1031 17:18:11.518501  190637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:18:11.526309  190637 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
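The "config check failed, skipping stale config cleanup" step above boils down to one `ls` over four kubeconfig files: if any is missing, minikube assumes there is no stale config to clean and goes straight to `kubeadm init`. A minimal sketch of that check, assuming only that a non-zero `ls` exit means "skip cleanup"; it targets a scratch directory instead of the real `/etc/kubernetes` so it is self-contained:

```shell
# Stand-in for /etc/kubernetes; the real check runs `sudo ls -la` on the node.
kube_dir="$(mktemp -d)"

check_stale_config() {
  # Mirrors the logged command: ls -la admin.conf kubelet.conf
  # controller-manager.conf scheduler.conf (all four must exist).
  ls -la \
    "$kube_dir/admin.conf" \
    "$kube_dir/kubelet.conf" \
    "$kube_dir/controller-manager.conf" \
    "$kube_dir/scheduler.conf" >/dev/null 2>&1
}

if check_stale_config; then
  echo "stale config present: cleanup would run"
else
  echo "config check failed, skipping stale config cleanup"
fi

rm -rf "$kube_dir"
```

With an empty directory this takes the "skipping" branch, matching the log line above.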
	I1031 17:18:11.526362  190637 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1031 17:18:11.567326  190637 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1031 17:18:11.567455  190637 kubeadm.go:317] [preflight] Running pre-flight checks
	I1031 17:18:11.595079  190637 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1031 17:18:11.595164  190637 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1031 17:18:11.595205  190637 kubeadm.go:317] OS: Linux
	I1031 17:18:11.595267  190637 kubeadm.go:317] CGROUPS_CPU: enabled
	I1031 17:18:11.595330  190637 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1031 17:18:11.595385  190637 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1031 17:18:11.595458  190637 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1031 17:18:11.595549  190637 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1031 17:18:11.595628  190637 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1031 17:18:11.595689  190637 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1031 17:18:11.595753  190637 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1031 17:18:11.595830  190637 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1031 17:18:11.659765  190637 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:18:11.659892  190637 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:18:11.660026  190637 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:18:11.786453  190637 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:18:11.788742  190637 out.go:204]   - Generating certificates and keys ...
	I1031 17:18:11.788891  190637 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1031 17:18:11.788972  190637 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1031 17:18:11.789038  190637 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 17:18:11.789089  190637 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1031 17:18:11.789146  190637 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 17:18:11.789188  190637 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1031 17:18:11.789280  190637 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1031 17:18:11.789383  190637 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1031 17:18:11.789477  190637 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 17:18:11.789585  190637 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 17:18:11.789640  190637 kubeadm.go:317] [certs] Using the existing "sa" key
	I1031 17:18:11.789715  190637 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:18:11.919606  190637 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:18:12.185414  190637 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:18:12.423645  190637 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:18:12.512469  190637 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:18:12.525567  190637 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:18:12.527585  190637 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:18:12.527662  190637 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1031 17:18:12.614073  190637 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:18:12.616221  190637 out.go:204]   - Booting up control plane ...
	I1031 17:18:12.616358  190637 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:18:12.617169  190637 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:18:12.618368  190637 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:18:12.619293  190637 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:18:12.621424  190637 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:18:52.622617  190637 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1031 17:18:52.622876  190637 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1031 17:18:52.623123  190637 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1031 17:18:57.623915  190637 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1031 17:18:57.624227  190637 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1031 17:19:07.625521  190637 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1031 17:19:07.625767  190637 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1031 17:19:27.627057  190637 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1031 17:19:27.627268  190637 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1031 17:20:07.628683  190637 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1031 17:20:07.628909  190637 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1031 17:20:07.628946  190637 kubeadm.go:317] 
	I1031 17:20:07.629024  190637 kubeadm.go:317] Unfortunately, an error has occurred:
	I1031 17:20:07.629097  190637 kubeadm.go:317] 	timed out waiting for the condition
	I1031 17:20:07.629108  190637 kubeadm.go:317] 
	I1031 17:20:07.629163  190637 kubeadm.go:317] This error is likely caused by:
	I1031 17:20:07.629211  190637 kubeadm.go:317] 	- The kubelet is not running
	I1031 17:20:07.629344  190637 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1031 17:20:07.629353  190637 kubeadm.go:317] 
	I1031 17:20:07.629481  190637 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1031 17:20:07.629536  190637 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1031 17:20:07.629576  190637 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1031 17:20:07.629586  190637 kubeadm.go:317] 
	I1031 17:20:07.629695  190637 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1031 17:20:07.629802  190637 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1031 17:20:07.629882  190637 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1031 17:20:07.629968  190637 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I1031 17:20:07.630072  190637 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1031 17:20:07.630189  190637 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I1031 17:20:07.631485  190637 kubeadm.go:317] W1031 17:18:11.562004   11454 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1031 17:20:07.631679  190637 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1031 17:20:07.631787  190637 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:20:07.631870  190637 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1031 17:20:07.631928  190637 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1031 17:20:07.632000  190637 kubeadm.go:398] StartCluster complete in 8m6.1386072s
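The repeated `[kubelet-check]` failures above are kubeadm polling the kubelet's health endpoint (`http://localhost:10248/healthz`) until its wait-control-plane timeout expires. A hedged sketch of that probe loop, with illustrative delays rather than kubeadm's exact schedule, using bash's `/dev/tcp` in place of `curl` so it needs no external tools:

```shell
probe_healthz() {
  # Stand-in for `curl -sSL http://localhost:10248/healthz`: succeed only if
  # a TCP connection to the kubelet health port can be opened.
  bash -c "exec 3<>/devv/tcp/${1:-127.0.0.1}/${2:-10248}" 2>/dev/null
}
# NOTE: typo guard — correct device path below.
probe_healthz() {
  bash -c "exec 3<>/dev/tcp/${1:-127.0.0.1}/${2:-10248}" 2>/dev/null
}

ok=0
for delay in 0 0; do           # kubeadm retries with growing timeouts
  if probe_healthz; then ok=1; break; fi
  echo "healthz probe failed; retrying in ${delay}s"
  sleep "$delay"
done

if [ "$ok" -eq 1 ]; then
  echo "kubelet healthz reachable"
else
  echo "kubelet isn't running or healthy"   # the condition kubeadm reports
fi
```

In this run the kubelet never came up (see the `--cni-conf-dir` crash loop below), so every probe ends in `connection refused` until the 4m0s wait gives up.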
	I1031 17:20:07.632058  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:20:07.632155  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:20:07.657524  190637 cri.go:87] found id: ""
	I1031 17:20:07.657554  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.657563  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:20:07.657572  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:20:07.657628  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:20:07.683067  190637 cri.go:87] found id: ""
	I1031 17:20:07.683098  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.683108  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:20:07.683117  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:20:07.683165  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:20:07.708474  190637 cri.go:87] found id: ""
	I1031 17:20:07.708498  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.708503  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:20:07.708509  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:20:07.708553  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:20:07.733306  190637 cri.go:87] found id: ""
	I1031 17:20:07.733332  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.733341  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:20:07.733349  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:20:07.733399  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:20:07.759842  190637 cri.go:87] found id: ""
	I1031 17:20:07.759870  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.759882  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:20:07.759888  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:20:07.759930  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:20:07.784929  190637 cri.go:87] found id: ""
	I1031 17:20:07.784958  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.784965  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:20:07.784970  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:20:07.785012  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:20:07.810755  190637 cri.go:87] found id: ""
	I1031 17:20:07.810785  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.810794  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:20:07.810801  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:20:07.810865  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:20:07.835591  190637 cri.go:87] found id: ""
	I1031 17:20:07.835618  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.835626  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:20:07.835636  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:20:07.835649  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:20:07.853867  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:17 kubernetes-upgrade-171032 kubelet[12551]: E1031 17:19:17.859999   12551 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.854235  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:18 kubernetes-upgrade-171032 kubelet[12563]: E1031 17:19:18.615680   12563 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.854617  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:19 kubernetes-upgrade-171032 kubelet[12573]: E1031 17:19:19.358228   12573 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.854968  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:20 kubernetes-upgrade-171032 kubelet[12584]: E1031 17:19:20.119095   12584 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.855310  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:20 kubernetes-upgrade-171032 kubelet[12595]: E1031 17:19:20.859438   12595 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.855654  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:21 kubernetes-upgrade-171032 kubelet[12607]: E1031 17:19:21.609296   12607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.856000  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:22 kubernetes-upgrade-171032 kubelet[12618]: E1031 17:19:22.362960   12618 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.856397  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:23 kubernetes-upgrade-171032 kubelet[12629]: E1031 17:19:23.109818   12629 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.856822  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:23 kubernetes-upgrade-171032 kubelet[12640]: E1031 17:19:23.859026   12640 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.857201  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:24 kubernetes-upgrade-171032 kubelet[12651]: E1031 17:19:24.608686   12651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.857575  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:25 kubernetes-upgrade-171032 kubelet[12662]: E1031 17:19:25.358423   12662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.858072  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:26 kubernetes-upgrade-171032 kubelet[12673]: E1031 17:19:26.109537   12673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.858511  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:26 kubernetes-upgrade-171032 kubelet[12684]: E1031 17:19:26.857818   12684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.858879  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:27 kubernetes-upgrade-171032 kubelet[12694]: E1031 17:19:27.616523   12694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.859236  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:28 kubernetes-upgrade-171032 kubelet[12704]: E1031 17:19:28.359030   12704 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.859630  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:29 kubernetes-upgrade-171032 kubelet[12715]: E1031 17:19:29.107927   12715 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.860011  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:29 kubernetes-upgrade-171032 kubelet[12726]: E1031 17:19:29.856931   12726 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.860406  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:30 kubernetes-upgrade-171032 kubelet[12737]: E1031 17:19:30.608904   12737 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.860778  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:31 kubernetes-upgrade-171032 kubelet[12749]: E1031 17:19:31.359475   12749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.861133  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:32 kubernetes-upgrade-171032 kubelet[12760]: E1031 17:19:32.107531   12760 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.861482  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:32 kubernetes-upgrade-171032 kubelet[12771]: E1031 17:19:32.861374   12771 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.861872  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:33 kubernetes-upgrade-171032 kubelet[12782]: E1031 17:19:33.605282   12782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.862228  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:34 kubernetes-upgrade-171032 kubelet[12794]: E1031 17:19:34.370314   12794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.862576  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:35 kubernetes-upgrade-171032 kubelet[12805]: E1031 17:19:35.120671   12805 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.862954  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:35 kubernetes-upgrade-171032 kubelet[12816]: E1031 17:19:35.858702   12816 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.863316  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:36 kubernetes-upgrade-171032 kubelet[12829]: E1031 17:19:36.613591   12829 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.863679  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:37 kubernetes-upgrade-171032 kubelet[12839]: E1031 17:19:37.356549   12839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.864057  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:38 kubernetes-upgrade-171032 kubelet[12850]: E1031 17:19:38.107918   12850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.864450  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:38 kubernetes-upgrade-171032 kubelet[12861]: E1031 17:19:38.861447   12861 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.864806  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:39 kubernetes-upgrade-171032 kubelet[12872]: E1031 17:19:39.609535   12872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.865220  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:40 kubernetes-upgrade-171032 kubelet[12882]: E1031 17:19:40.358351   12882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.865649  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:41 kubernetes-upgrade-171032 kubelet[12894]: E1031 17:19:41.106147   12894 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.866058  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:41 kubernetes-upgrade-171032 kubelet[12905]: E1031 17:19:41.870668   12905 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.866643  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:42 kubernetes-upgrade-171032 kubelet[12916]: E1031 17:19:42.606832   12916 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.867233  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:43 kubernetes-upgrade-171032 kubelet[12927]: E1031 17:19:43.366210   12927 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.867767  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:44 kubernetes-upgrade-171032 kubelet[12938]: E1031 17:19:44.121244   12938 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.868355  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:44 kubernetes-upgrade-171032 kubelet[12949]: E1031 17:19:44.915125   12949 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.868936  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:45 kubernetes-upgrade-171032 kubelet[12960]: E1031 17:19:45.614521   12960 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.869432  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:46 kubernetes-upgrade-171032 kubelet[12971]: E1031 17:19:46.357202   12971 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.870090  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:47 kubernetes-upgrade-171032 kubelet[12981]: E1031 17:19:47.110188   12981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.870680  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:47 kubernetes-upgrade-171032 kubelet[12992]: E1031 17:19:47.865319   12992 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.871258  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:48 kubernetes-upgrade-171032 kubelet[13003]: E1031 17:19:48.610783   13003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.871841  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:49 kubernetes-upgrade-171032 kubelet[13014]: E1031 17:19:49.364535   13014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.872450  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:50 kubernetes-upgrade-171032 kubelet[13025]: E1031 17:19:50.118765   13025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.873030  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:50 kubernetes-upgrade-171032 kubelet[13035]: E1031 17:19:50.870666   13035 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.873619  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:51 kubernetes-upgrade-171032 kubelet[13046]: E1031 17:19:51.611040   13046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.874116  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:52 kubernetes-upgrade-171032 kubelet[13058]: E1031 17:19:52.364373   13058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.874761  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:53 kubernetes-upgrade-171032 kubelet[13069]: E1031 17:19:53.118969   13069 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.875307  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:53 kubernetes-upgrade-171032 kubelet[13080]: E1031 17:19:53.864484   13080 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.875953  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:54 kubernetes-upgrade-171032 kubelet[13091]: E1031 17:19:54.624050   13091 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.876555  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:55 kubernetes-upgrade-171032 kubelet[13102]: E1031 17:19:55.393244   13102 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.877142  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:56 kubernetes-upgrade-171032 kubelet[13112]: E1031 17:19:56.120404   13112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.877730  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:56 kubernetes-upgrade-171032 kubelet[13123]: E1031 17:19:56.857219   13123 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.878300  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:57 kubernetes-upgrade-171032 kubelet[13133]: E1031 17:19:57.617821   13133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.878711  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:58 kubernetes-upgrade-171032 kubelet[13144]: E1031 17:19:58.360514   13144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.879065  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:59 kubernetes-upgrade-171032 kubelet[13155]: E1031 17:19:59.118905   13155 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.879415  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:59 kubernetes-upgrade-171032 kubelet[13166]: E1031 17:19:59.857691   13166 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.879756  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:00 kubernetes-upgrade-171032 kubelet[13176]: E1031 17:20:00.611772   13176 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.880126  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:01 kubernetes-upgrade-171032 kubelet[13186]: E1031 17:20:01.358451   13186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.880481  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:02 kubernetes-upgrade-171032 kubelet[13196]: E1031 17:20:02.114080   13196 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.880822  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:02 kubernetes-upgrade-171032 kubelet[13206]: E1031 17:20:02.917959   13206 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.881167  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:03 kubernetes-upgrade-171032 kubelet[13217]: E1031 17:20:03.619555   13217 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.881509  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:04 kubernetes-upgrade-171032 kubelet[13227]: E1031 17:20:04.356232   13227 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.881847  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:05 kubernetes-upgrade-171032 kubelet[13238]: E1031 17:20:05.109111   13238 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.882186  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:05 kubernetes-upgrade-171032 kubelet[13250]: E1031 17:20:05.857299   13250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.882538  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:06 kubernetes-upgrade-171032 kubelet[13261]: E1031 17:20:06.608161   13261 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.882881  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:07 kubernetes-upgrade-171032 kubelet[13272]: E1031 17:20:07.357203   13272 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
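The crash loop above has a single cause: `--cni-conf-dir` was removed from the kubelet (along with the other dockershim flags) in v1.24, so a `/var/lib/kubelet/kubeadm-flags.env` written by an older release makes every kubelet restart fail with "unknown flag". A sketch of scrubbing the dead flag, run against a sample file rather than the real one so it is self-contained; on a real node you would rewrite the file in place and `systemctl restart kubelet`:

```shell
# Sample stand-in for /var/lib/kubelet/kubeadm-flags.env from a pre-1.24 setup.
flags_env="$(mktemp)"
cat > "$flags_env" <<'EOF'
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --cni-conf-dir=/etc/cni/net.d --pod-infra-container-image=registry.k8s.io/pause:3.8"
EOF

# Print the args line with the removed flag (and its value) scrubbed out.
sed 's/ *--cni-conf-dir=[^" ]*//' "$flags_env"
rm -f "$flags_env"
```

This matches the test name in the log (`kubernetes-upgrade-171032`): the upgrade to v1.25.3 carried a stale kubelet flag forward, and kubeadm then timed out waiting for a kubelet that could never start.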
	I1031 17:20:07.883015  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:20:07.883032  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:20:07.902215  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:20:07.902267  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:20:07.959431  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:20:07.959458  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:20:07.959470  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:20:08.022467  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:20:08.022507  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1031 17:20:08.053446  190637 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1031 17:18:11.562004   11454 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1031 17:20:08.053495  190637 out.go:239] * 
	W1031 17:20:08.053712  190637 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1031 17:18:11.562004   11454 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1031 17:20:08.053747  190637 out.go:239] * 
	W1031 17:20:08.054559  190637 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 17:20:08.056952  190637 out.go:177] X Problems detected in kubelet:
	I1031 17:20:08.059276  190637 out.go:177]   Oct 31 17:19:17 kubernetes-upgrade-171032 kubelet[12551]: E1031 17:19:17.859999   12551 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:20:08.060863  190637 out.go:177]   Oct 31 17:19:18 kubernetes-upgrade-171032 kubelet[12563]: E1031 17:19:18.615680   12563 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:20:08.062503  190637 out.go:177]   Oct 31 17:19:19 kubernetes-upgrade-171032 kubelet[12573]: E1031 17:19:19.358228   12573 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:20:08.066358  190637 out.go:177] 
	W1031 17:20:08.068100  190637 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1031 17:18:11.562004   11454 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W1031 17:20:08.068251  190637 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1031 17:20:08.068324  190637 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1031 17:20:08.070131  190637 out.go:177] 

** /stderr **
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-171032 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-171032 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-171032 version --output=json: exit status 1 (54.748247ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "25",
	    "gitVersion": "v1.25.3",
	    "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	    "gitTreeState": "clean",
	    "buildDate": "2022-10-12T10:57:26Z",
	    "goVersion": "go1.19.2",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  },
	  "kustomizeVersion": "v4.5.7"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2022-10-31 17:20:08.527572836 +0000 UTC m=+2651.303151704
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-171032
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-171032:

-- stdout --
	[
	    {
	        "Id": "7a6b461d8d1707b931a75633f9f0e49ccea20371bebc58745824e281cc616750",
	        "Created": "2022-10-31T17:10:46.654564381Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 191015,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-10-31T17:11:25.820950495Z",
	            "FinishedAt": "2022-10-31T17:11:23.739777818Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/7a6b461d8d1707b931a75633f9f0e49ccea20371bebc58745824e281cc616750/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a6b461d8d1707b931a75633f9f0e49ccea20371bebc58745824e281cc616750/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a6b461d8d1707b931a75633f9f0e49ccea20371bebc58745824e281cc616750/hosts",
	        "LogPath": "/var/lib/docker/containers/7a6b461d8d1707b931a75633f9f0e49ccea20371bebc58745824e281cc616750/7a6b461d8d1707b931a75633f9f0e49ccea20371bebc58745824e281cc616750-json.log",
	        "Name": "/kubernetes-upgrade-171032",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-171032:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-171032",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2e0befa4849997ba0c3ded8b54a059298dcbe0603639234d89338f3b51bd154a-init/diff:/var/lib/docker/overlay2/850407c9352fc6d39f5a61f0f7868bc687359dfa2a9e604aacedd9e4180b6b24/diff:/var/lib/docker/overlay2/21aaafded5bd8cd556e28d44c5789deca54d553c1b7434f81407bd7fcd1957e2/diff:/var/lib/docker/overlay2/6092cf791661e4cab1851c6157178d18fd0167b1f47a6bebec580856fb033b44/diff:/var/lib/docker/overlay2/de1b6fab5ea890ce9ec3ab284acb657037d204cfa01fe082b7ab7fb1c0539f4a/diff:/var/lib/docker/overlay2/4ce8b04194bb323d53c06b240875a6203e31c8f7f41d68021a3a9c268299cbed/diff:/var/lib/docker/overlay2/efdd112bff28ec4eeb4274df5357bc6a943d954bf3bb5969c95a3f396318e5f2/diff:/var/lib/docker/overlay2/bf27ecc71ffb48aba0eb712986cbc98c99838dc8b04631580d9a9495f718f594/diff:/var/lib/docker/overlay2/448bbda6d5530c89aca7714db71b5eb84689a6dba7ac558086a7568817db54ae/diff:/var/lib/docker/overlay2/b43560491d25a8924ac5cae2ec4dc68deb89b0f8f1e1b7a720313dc4eeb82428/diff:/var/lib/docker/overlay2/2027e33b3f092c531efa1f98cabb990a64b3ff51978a38e4261ef8e82655e56d/diff:/var/lib/docker/overlay2/40d06c11aaa05bdf4d5349d7d00fdf7d8f962768ce49b8f03d4d2d5a23706a83/diff:/var/lib/docker/overlay2/3a1bdaf48ececa097bf7b4c7e715cdc5045b596a2cb2bf0d2d335363c91b7763/diff:/var/lib/docker/overlay2/a37c63314afa70bd7e634537d33bcefbffbbe9f43c8aa45d9d42bd58cc3b0cf8/diff:/var/lib/docker/overlay2/ff91a87ac6071b8ab64a547410e1499ce95011395ea036dd714d0dd5129adb37/diff:/var/lib/docker/overlay2/aefdb5f8ac62063ccf24e1bc21262559900c234b9c151acd755a4b834d51fea9/diff:/var/lib/docker/overlay2/915c92a89aba7500f1323ec1a9c9a53d856e818f9776d9f9ed08bf36936d3e4a/diff:/var/lib/docker/overlay2/52c13726cbf2ed741bd08a4fd814eca88e84b1d329661e62d858be944b3756fa/diff:/var/lib/docker/overlay2/459b8ced782783b6c14513346d3291aeaa7bf95628d52d5734ceb8e3dc2bb34a/diff:/var/lib/docker/overlay2/15b295bfa3bda6886453bc187c23d72b25ee63f5085ee0f7f33e1c16159f3458/diff:/var/lib/docker/overlay2/23b0f6d1317fd997d142b8b463d727f2337496dada67bd1d2d3b0e9e864b6c6b/diff:/var/lib/docker/overlay2/5865c95ad7cd03f9b4844f71209de766041b054c00595d5aec780c06ae768435/diff:/var/lib/docker/overlay2/efa08e39c835181ac59410e6fa91805bdf6038812cf9de2fe6166b28ddbd0551/diff:/var/lib/docker/overlay2/e0b9a735c6e765ddbdea44d18a2b26b9b2c3db322dca7fbab94d6e76ab322d51/diff:/var/lib/docker/overlay2/5643dd6e2ea4886915404d641ac2a2f0327156d44c5cd2960ec0ce17a61bedb2/diff:/var/lib/docker/overlay2/4f789b09379fe08af21ac5ede6a916c169e328eac752d559ecde59f6f36263ea/diff:/var/lib/docker/overlay2/4fdd55958a1cbe05aa4c0d860e201090b87575a39b37ea9555600f8cb3c2256c/diff:/var/lib/docker/overlay2/db64f95c578859a9eb3b7bb1debcf894e5466441c4c6c27c9a3eae7247029669/diff:/var/lib/docker/overlay2/6ea16e3482414ff15bfc6317e5fb3463df41afc3fa76d7b22ef86e1a735fbf2d/diff:/var/lib/docker/overlay2/2141b9e79d9eca44b4934f0ab5e90e3a7a6326ad619ce3e981da60d3b9397952/diff:/var/lib/docker/overlay2/ed7d69a3a4de28360197cbde205a3c218b2c785ad29581c25ae9d74275fbc3af/diff:/var/lib/docker/overlay2/7a003859a39e8ad3bd9681a6e25c7687c68b45396a9bd9309f5f2fc5a6db937f/diff:/var/lib/docker/overlay2/9f343157cfc9dd91c334ef0927fcbdff9b1c543bc670a05b547ad650c42a9e4e/diff:/var/lib/docker/overlay2/1895e41d6462ac28032e1938f1c755f37d5063dbfcfce66c80a1bb5542592f87/diff:/var/lib/docker/overlay2/139059382b6f47a4d917321fc96bb88b4e4496bc6d72d5c140f22414932cd23a/diff:/var/lib/docker/overlay2/877f4b5fd322b19211f62544018b39a1fc4b920707d11dc957cac06f2232d4b5/diff:/var/lib/docker/overlay2/7f935ec11ddf890b56355eff56a25f995efb95fe3f8718078d517e5126fc40af/diff:/var/lib/docker/overlay2/f746de1e06eaa48a0ff284cbeec7e6f78c3eb97d1a90e020d82d10c2654236e7/diff:/var/lib/docker/overlay2/f58fee49407523fa2a2a815cfb285f088abd1fc7b3196c3c1a6b27a8cc1d4a3f/diff:/var/lib/docker/overlay2/2f9e685ccc40a5063568a58dc39e286eab6aa4fd66ad71614b75fb8082c6c201/diff:/var/lib/docker/overlay2/5d49dd0a636da4d0a250625e83cf665e98dba840590d94ac41b6f345e76aa187/diff:/var/lib/docker/overlay2/818cc610ded8dc62555773ef1e35bea879ef657b00a70e6c878f5424f518134a/diff:/var/lib/docker/overlay2/c98da52ad37a10af980b89a4e4ddd50b85ffa212a2847b428571f2544cb3eeb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e0befa4849997ba0c3ded8b54a059298dcbe0603639234d89338f3b51bd154a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e0befa4849997ba0c3ded8b54a059298dcbe0603639234d89338f3b51bd154a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e0befa4849997ba0c3ded8b54a059298dcbe0603639234d89338f3b51bd154a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-171032",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-171032/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-171032",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-171032",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-171032",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0706ad3f8def9a2ad9de0e75ff37eb2e369005e2f363754dbc4530cf5902f9f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49372"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49371"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49368"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49370"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49369"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a0706ad3f8de",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-171032": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a6b461d8d17",
	                        "kubernetes-upgrade-171032"
	                    ],
	                    "NetworkID": "fa677ee5c17a19aa77e1eda0e758186257fa89c3935349368ed99bb0e5a1ed2d",
	                    "EndpointID": "b434b36d235f72ce62763c9c863c4bfdbab4b03e877ec7d63074ecb42cd8f235",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-171032 -n kubernetes-upgrade-171032
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-171032 -n kubernetes-upgrade-171032: exit status 2 (384.550088ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-171032 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-171119                              | no-preload-171119            | jenkins | v1.27.1 | 31 Oct 22 17:12 UTC | 31 Oct 22 17:12 UTC |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-171119             | no-preload-171119            | jenkins | v1.27.1 | 31 Oct 22 17:12 UTC | 31 Oct 22 17:12 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-171119                              | no-preload-171119            | jenkins | v1.27.1 | 31 Oct 22 17:12 UTC | 31 Oct 22 17:17 UTC |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-171107   | old-k8s-version-171107       | jenkins | v1.27.1 | 31 Oct 22 17:13 UTC | 31 Oct 22 17:13 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-171107                         | old-k8s-version-171107       | jenkins | v1.27.1 | 31 Oct 22 17:13 UTC | 31 Oct 22 17:13 UTC |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-171107        | old-k8s-version-171107       | jenkins | v1.27.1 | 31 Oct 22 17:13 UTC | 31 Oct 22 17:13 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-171107                         | old-k8s-version-171107       | jenkins | v1.27.1 | 31 Oct 22 17:13 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-171023                         | cert-expiration-171023       | jenkins | v1.27.1 | 31 Oct 22 17:14 UTC | 31 Oct 22 17:14 UTC |
	|         | --memory=2048                                     |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                           |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-171023                         | cert-expiration-171023       | jenkins | v1.27.1 | 31 Oct 22 17:14 UTC | 31 Oct 22 17:14 UTC |
	| start   | -p embed-certs-171419                             | embed-certs-171419           | jenkins | v1.27.1 | 31 Oct 22 17:14 UTC | 31 Oct 22 17:15 UTC |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-171419       | embed-certs-171419           | jenkins | v1.27.1 | 31 Oct 22 17:15 UTC | 31 Oct 22 17:15 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-171419                             | embed-certs-171419           | jenkins | v1.27.1 | 31 Oct 22 17:15 UTC | 31 Oct 22 17:15 UTC |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-171419            | embed-certs-171419           | jenkins | v1.27.1 | 31 Oct 22 17:15 UTC | 31 Oct 22 17:15 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-171419                             | embed-certs-171419           | jenkins | v1.27.1 | 31 Oct 22 17:15 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-171119 sudo                         | no-preload-171119            | jenkins | v1.27.1 | 31 Oct 22 17:18 UTC | 31 Oct 22 17:18 UTC |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-171119                              | no-preload-171119            | jenkins | v1.27.1 | 31 Oct 22 17:18 UTC | 31 Oct 22 17:18 UTC |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-171119                              | no-preload-171119            | jenkins | v1.27.1 | 31 Oct 22 17:18 UTC | 31 Oct 22 17:18 UTC |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-171119                              | no-preload-171119            | jenkins | v1.27.1 | 31 Oct 22 17:18 UTC | 31 Oct 22 17:18 UTC |
	| delete  | -p no-preload-171119                              | no-preload-171119            | jenkins | v1.27.1 | 31 Oct 22 17:18 UTC | 31 Oct 22 17:18 UTC |
	| delete  | -p                                                | disable-driver-mounts-171820 | jenkins | v1.27.1 | 31 Oct 22 17:18 UTC | 31 Oct 22 17:18 UTC |
	|         | disable-driver-mounts-171820                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-171820 | jenkins | v1.27.1 | 31 Oct 22 17:18 UTC | 31 Oct 22 17:19 UTC |
	|         | default-k8s-diff-port-171820                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-diff-port-171820 | jenkins | v1.27.1 | 31 Oct 22 17:19 UTC | 31 Oct 22 17:19 UTC |
	|         | default-k8s-diff-port-171820                      |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-diff-port-171820 | jenkins | v1.27.1 | 31 Oct 22 17:19 UTC | 31 Oct 22 17:19 UTC |
	|         | default-k8s-diff-port-171820                      |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-171820  | default-k8s-diff-port-171820 | jenkins | v1.27.1 | 31 Oct 22 17:19 UTC | 31 Oct 22 17:19 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-171820 | jenkins | v1.27.1 | 31 Oct 22 17:19 UTC |                     |
	|         | default-k8s-diff-port-171820                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/31 17:19:34
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:19:34.354857  236821 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:19:34.354989  236821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:19:34.355000  236821 out.go:309] Setting ErrFile to fd 2...
	I1031 17:19:34.355005  236821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:19:34.355160  236821 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
	I1031 17:19:34.355823  236821 out.go:303] Setting JSON to false
	I1031 17:19:34.358128  236821 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3724,"bootTime":1667233050,"procs":1059,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:19:34.358209  236821 start.go:126] virtualization: kvm guest
	I1031 17:19:34.361057  236821 out.go:177] * [default-k8s-diff-port-171820] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:19:34.363081  236821 out.go:177]   - MINIKUBE_LOCATION=15232
	I1031 17:19:34.363018  236821 notify.go:220] Checking for updates...
	I1031 17:19:34.366287  236821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:19:34.367902  236821 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 17:19:34.369594  236821 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	I1031 17:19:34.371450  236821 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:19:34.373472  236821 config.go:180] Loaded profile config "default-k8s-diff-port-171820": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 17:19:34.374118  236821 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 17:19:34.407501  236821 docker.go:137] docker version: linux-20.10.21
	I1031 17:19:34.407629  236821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 17:19:34.515501  236821 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-10-31 17:19:34.430937757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 17:19:34.515605  236821 docker.go:254] overlay module found
	I1031 17:19:29.972275  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:30.472281  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:30.971779  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:31.471702  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:31.972306  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:32.472032  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:32.971595  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:33.471605  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:33.972022  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:34.471531  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:34.517818  236821 out.go:177] * Using the docker driver based on existing profile
	I1031 17:19:34.519682  236821 start.go:282] selected driver: docker
	I1031 17:19:34.519707  236821 start.go:808] validating driver "docker" against &{Name:default-k8s-diff-port-171820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-171820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:19:34.519815  236821 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:19:34.520786  236821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 17:19:34.627199  236821 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-10-31 17:19:34.545355366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 17:19:34.627486  236821 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 17:19:34.627507  236821 cni.go:95] Creating CNI manager for ""
	I1031 17:19:34.627513  236821 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 17:19:34.627523  236821 start_flags.go:317] config:
	{Name:default-k8s-diff-port-171820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-171820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:19:34.630635  236821 out.go:177] * Starting control plane node default-k8s-diff-port-171820 in cluster default-k8s-diff-port-171820
	I1031 17:19:34.632114  236821 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1031 17:19:34.633650  236821 out.go:177] * Pulling base image ...
	I1031 17:19:34.635073  236821 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1031 17:19:34.635124  236821 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1031 17:19:34.635134  236821 cache.go:57] Caching tarball of preloaded images
	I1031 17:19:34.635157  236821 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1031 17:19:34.635374  236821 preload.go:174] Found /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:19:34.635389  236821 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I1031 17:19:34.635508  236821 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/default-k8s-diff-port-171820/config.json ...
	I1031 17:19:34.660677  236821 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1031 17:19:34.660702  236821 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1031 17:19:34.660711  236821 cache.go:208] Successfully downloaded all kic artifacts
	I1031 17:19:34.660740  236821 start.go:364] acquiring machines lock for default-k8s-diff-port-171820: {Name:mkcefc349b46895b68e1e841a1c4e1cae6d03286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:19:34.660828  236821 start.go:368] acquired machines lock for "default-k8s-diff-port-171820" in 68.591µs
	I1031 17:19:34.660856  236821 start.go:96] Skipping create...Using existing machine configuration
	I1031 17:19:34.660862  236821 fix.go:55] fixHost starting: 
	I1031 17:19:34.661125  236821 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-171820 --format={{.State.Status}}
	I1031 17:19:34.687514  236821 fix.go:103] recreateIfNeeded on default-k8s-diff-port-171820: state=Stopped err=<nil>
	W1031 17:19:34.687557  236821 fix.go:129] unexpected machine state, will restart: <nil>
	I1031 17:19:34.690189  236821 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-171820" ...
	I1031 17:19:30.804580  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:32.804667  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:34.804764  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:34.691704  236821 cli_runner.go:164] Run: docker start default-k8s-diff-port-171820
	I1031 17:19:35.090471  236821 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-171820 --format={{.State.Status}}
	I1031 17:19:35.124057  236821 kic.go:415] container "default-k8s-diff-port-171820" state is running.
	I1031 17:19:35.124489  236821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-171820
	I1031 17:19:35.151221  236821 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/default-k8s-diff-port-171820/config.json ...
	I1031 17:19:35.151434  236821 machine.go:88] provisioning docker machine ...
	I1031 17:19:35.151460  236821 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-171820"
	I1031 17:19:35.151508  236821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-171820
	I1031 17:19:35.177862  236821 main.go:134] libmachine: Using SSH client type: native
	I1031 17:19:35.178087  236821 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49402 <nil> <nil>}
	I1031 17:19:35.178127  236821 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-171820 && echo "default-k8s-diff-port-171820" | sudo tee /etc/hostname
	I1031 17:19:35.178748  236821 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60384->127.0.0.1:49402: read: connection reset by peer
	I1031 17:19:38.305951  236821 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-171820
	
	I1031 17:19:38.306022  236821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-171820
	I1031 17:19:38.331134  236821 main.go:134] libmachine: Using SSH client type: native
	I1031 17:19:38.331325  236821 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49402 <nil> <nil>}
	I1031 17:19:38.331354  236821 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-171820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-171820/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-171820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:19:38.447807  236821 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:19:38.447834  236821 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-3650/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-3650/.minikube}
	I1031 17:19:38.447850  236821 ubuntu.go:177] setting up certificates
	I1031 17:19:38.447860  236821 provision.go:83] configureAuth start
	I1031 17:19:38.447911  236821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-171820
	I1031 17:19:38.473687  236821 provision.go:138] copyHostCerts
	I1031 17:19:38.473744  236821 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem, removing ...
	I1031 17:19:38.473763  236821 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem
	I1031 17:19:38.473833  236821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem (1078 bytes)
	I1031 17:19:38.473910  236821 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem, removing ...
	I1031 17:19:38.473922  236821 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem
	I1031 17:19:38.473955  236821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem (1123 bytes)
	I1031 17:19:38.474005  236821 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem, removing ...
	I1031 17:19:38.474013  236821 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem
	I1031 17:19:38.474038  236821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem (1679 bytes)
	I1031 17:19:38.474080  236821 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-171820 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-171820]
	I1031 17:19:38.745809  236821 provision.go:172] copyRemoteCerts
	I1031 17:19:38.745869  236821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:19:38.745906  236821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-171820
	I1031 17:19:38.771747  236821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/default-k8s-diff-port-171820/id_rsa Username:docker}
	I1031 17:19:38.856554  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 17:19:38.875739  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 17:19:38.894054  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1031 17:19:38.914247  236821 provision.go:86] duration metric: configureAuth took 466.37407ms
	I1031 17:19:38.914283  236821 ubuntu.go:193] setting minikube options for container-runtime
	I1031 17:19:38.914507  236821 config.go:180] Loaded profile config "default-k8s-diff-port-171820": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 17:19:38.914526  236821 machine.go:91] provisioned docker machine in 3.763074327s
	I1031 17:19:38.914536  236821 start.go:300] post-start starting for "default-k8s-diff-port-171820" (driver="docker")
	I1031 17:19:38.914545  236821 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:19:38.914601  236821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:19:38.914648  236821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-171820
	I1031 17:19:38.940365  236821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/default-k8s-diff-port-171820/id_rsa Username:docker}
	I1031 17:19:39.028638  236821 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:19:39.031649  236821 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1031 17:19:39.031673  236821 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1031 17:19:39.031681  236821 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1031 17:19:39.031686  236821 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1031 17:19:39.031695  236821 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/addons for local assets ...
	I1031 17:19:39.031741  236821 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/files for local assets ...
	I1031 17:19:39.031802  236821 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem -> 100972.pem in /etc/ssl/certs
	I1031 17:19:39.031883  236821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:19:39.039526  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /etc/ssl/certs/100972.pem (1708 bytes)
	I1031 17:19:39.056792  236821 start.go:303] post-start completed in 142.239398ms
	I1031 17:19:39.056879  236821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 17:19:39.056922  236821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-171820
	I1031 17:19:39.083590  236821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/default-k8s-diff-port-171820/id_rsa Username:docker}
	I1031 17:19:39.164860  236821 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1031 17:19:39.168891  236821 fix.go:57] fixHost completed within 4.508022906s
	I1031 17:19:39.168920  236821 start.go:83] releasing machines lock for "default-k8s-diff-port-171820", held for 4.508080192s
	I1031 17:19:39.168997  236821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-171820
	I1031 17:19:39.194720  236821 ssh_runner.go:195] Run: systemctl --version
	I1031 17:19:39.194777  236821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-171820
	I1031 17:19:39.194824  236821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:19:39.194890  236821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-171820
	I1031 17:19:39.221239  236821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/default-k8s-diff-port-171820/id_rsa Username:docker}
	I1031 17:19:39.221876  236821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/default-k8s-diff-port-171820/id_rsa Username:docker}
	I1031 17:19:39.336530  236821 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:19:39.348186  236821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:19:34.972060  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:35.472321  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:35.972044  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:36.471491  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:36.972402  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:37.471866  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:37.972388  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:38.472294  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:38.971385  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:39.471689  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:37.304453  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:39.305325  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:39.971746  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:40.471493  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:40.971586  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:41.471712  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:41.972287  202010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:19:42.171946  202010 kubeadm.go:1067] duration metric: took 14.45513464s to wait for elevateKubeSystemPrivileges.
	I1031 17:19:42.171979  202010 kubeadm.go:398] StartCluster complete in 5m40.742325884s
	I1031 17:19:42.171999  202010 settings.go:142] acquiring lock: {Name:mk815a86086a5a2f83362177da735ab9253065a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:19:42.172179  202010 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 17:19:42.173146  202010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3650/kubeconfig: {Name:mkbe3dcb9ce3e3942a7be44b5e867e137f1872a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:19:42.690485  202010 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-171107" rescaled to 1
	I1031 17:19:42.690539  202010 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1031 17:19:42.693132  202010 out.go:177] * Verifying Kubernetes components...
	I1031 17:19:42.690591  202010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 17:19:42.690603  202010 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1031 17:19:42.690793  202010 config.go:180] Loaded profile config "old-k8s-version-171107": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1031 17:19:42.694576  202010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:19:42.694629  202010 addons.go:65] Setting dashboard=true in profile "old-k8s-version-171107"
	I1031 17:19:42.694644  202010 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-171107"
	I1031 17:19:42.694654  202010 addons.go:153] Setting addon dashboard=true in "old-k8s-version-171107"
	I1031 17:19:42.694657  202010 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-171107"
	I1031 17:19:42.694670  202010 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-171107"
	I1031 17:19:42.694683  202010 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-171107"
	W1031 17:19:42.694693  202010 addons.go:162] addon storage-provisioner should already be in state true
	I1031 17:19:42.694685  202010 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-171107"
	I1031 17:19:42.694736  202010 host.go:66] Checking if "old-k8s-version-171107" exists ...
	W1031 17:19:42.694749  202010 addons.go:162] addon metrics-server should already be in state true
	I1031 17:19:42.694798  202010 host.go:66] Checking if "old-k8s-version-171107" exists ...
	W1031 17:19:42.694663  202010 addons.go:162] addon dashboard should already be in state true
	I1031 17:19:42.694887  202010 host.go:66] Checking if "old-k8s-version-171107" exists ...
	I1031 17:19:42.694683  202010 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-171107"
	I1031 17:19:42.695144  202010 cli_runner.go:164] Run: docker container inspect old-k8s-version-171107 --format={{.State.Status}}
	I1031 17:19:42.695155  202010 cli_runner.go:164] Run: docker container inspect old-k8s-version-171107 --format={{.State.Status}}
	I1031 17:19:42.695227  202010 cli_runner.go:164] Run: docker container inspect old-k8s-version-171107 --format={{.State.Status}}
	I1031 17:19:42.695265  202010 cli_runner.go:164] Run: docker container inspect old-k8s-version-171107 --format={{.State.Status}}
	I1031 17:19:42.732732  202010 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1031 17:19:42.734403  202010 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:19:42.735971  202010 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:19:42.735997  202010 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 17:19:42.736003  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 17:19:42.736012  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 17:19:42.736056  202010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171107
	I1031 17:19:42.736058  202010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171107
	I1031 17:19:42.735974  202010 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1031 17:19:42.738668  202010 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-171107"
	W1031 17:19:42.741156  202010 addons.go:162] addon default-storageclass should already be in state true
	I1031 17:19:42.742830  202010 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I1031 17:19:39.359009  236821 docker.go:189] disabling docker service ...
	I1031 17:19:39.359057  236821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 17:19:39.369087  236821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 17:19:39.378587  236821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 17:19:39.453971  236821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 17:19:39.529144  236821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 17:19:39.539955  236821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:19:39.553493  236821 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I1031 17:19:39.561952  236821 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1031 17:19:39.570095  236821 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1031 17:19:39.578365  236821 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I1031 17:19:39.587128  236821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:19:39.594186  236821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:19:39.601542  236821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:19:39.679973  236821 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:19:39.749225  236821 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1031 17:19:39.749326  236821 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1031 17:19:39.752753  236821 start.go:472] Will wait 60s for crictl version
	I1031 17:19:39.752811  236821 ssh_runner.go:195] Run: sudo crictl version
	I1031 17:19:39.779813  236821 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-10-31T17:19:39Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1031 17:19:42.741208  202010 host.go:66] Checking if "old-k8s-version-171107" exists ...
	I1031 17:19:42.744257  202010 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1031 17:19:42.744278  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1031 17:19:42.744337  202010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171107
	I1031 17:19:42.744674  202010 cli_runner.go:164] Run: docker container inspect old-k8s-version-171107 --format={{.State.Status}}
	I1031 17:19:42.771263  202010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/old-k8s-version-171107/id_rsa Username:docker}
	I1031 17:19:42.774536  202010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/old-k8s-version-171107/id_rsa Username:docker}
	I1031 17:19:42.789256  202010 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 17:19:42.789281  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 17:19:42.789333  202010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171107
	I1031 17:19:42.798548  202010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/old-k8s-version-171107/id_rsa Username:docker}
	I1031 17:19:42.808672  202010 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-171107" to be "Ready" ...
	I1031 17:19:42.808747  202010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 17:19:42.811468  202010 node_ready.go:49] node "old-k8s-version-171107" has status "Ready":"True"
	I1031 17:19:42.811485  202010 node_ready.go:38] duration metric: took 2.783051ms waiting for node "old-k8s-version-171107" to be "Ready" ...
	I1031 17:19:42.811496  202010 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:19:42.823677  202010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/old-k8s-version-171107/id_rsa Username:docker}
	I1031 17:19:42.851536  202010 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bsxlg" in "kube-system" namespace to be "Ready" ...
	I1031 17:19:42.966342  202010 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1031 17:19:42.966373  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1031 17:19:42.966551  202010 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 17:19:42.966572  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I1031 17:19:42.968172  202010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:19:43.060621  202010 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1031 17:19:43.060645  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1031 17:19:43.063926  202010 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 17:19:43.063968  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 17:19:43.148953  202010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 17:19:43.162436  202010 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1031 17:19:43.162468  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1031 17:19:43.168660  202010 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 17:19:43.168693  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 17:19:43.255789  202010 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1031 17:19:43.255820  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I1031 17:19:43.263557  202010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 17:19:43.350017  202010 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1031 17:19:43.350099  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1031 17:19:43.447697  202010 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1031 17:19:43.447725  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1031 17:19:43.547543  202010 start.go:826] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I1031 17:19:43.548291  202010 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1031 17:19:43.548316  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1031 17:19:43.647585  202010 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1031 17:19:43.647619  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1031 17:19:43.672578  202010 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1031 17:19:43.672614  202010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1031 17:19:43.848696  202010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1031 17:19:44.456360  202010 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.192752384s)
	I1031 17:19:44.456406  202010 addons.go:383] Verifying addon metrics-server=true in "old-k8s-version-171107"
	I1031 17:19:41.804291  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:44.305261  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:44.866252  202010 pod_ready.go:102] pod "coredns-5644d7b6d9-bsxlg" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:45.160555  202010 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.311737323s)
	I1031 17:19:45.163259  202010 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1031 17:19:45.164651  202010 addons.go:414] enableAddons completed in 2.474039841s
	I1031 17:19:47.360265  202010 pod_ready.go:102] pod "coredns-5644d7b6d9-bsxlg" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:49.361516  202010 pod_ready.go:102] pod "coredns-5644d7b6d9-bsxlg" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:46.804502  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:49.305883  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:50.827066  236821 ssh_runner.go:195] Run: sudo crictl version
	I1031 17:19:50.856202  236821 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1031 17:19:50.856270  236821 ssh_runner.go:195] Run: containerd --version
	I1031 17:19:50.886800  236821 ssh_runner.go:195] Run: containerd --version
	I1031 17:19:50.920311  236821 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	I1031 17:19:50.922072  236821 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-171820 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1031 17:19:50.947211  236821 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1031 17:19:50.951270  236821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:19:50.963154  236821 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1031 17:19:50.963227  236821 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 17:19:50.991770  236821 containerd.go:553] all images are preloaded for containerd runtime.
	I1031 17:19:50.991804  236821 containerd.go:467] Images already preloaded, skipping extraction
	I1031 17:19:50.991862  236821 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 17:19:51.018484  236821 containerd.go:553] all images are preloaded for containerd runtime.
	I1031 17:19:51.018515  236821 cache_images.go:84] Images are preloaded, skipping loading
	I1031 17:19:51.018619  236821 ssh_runner.go:195] Run: sudo crictl info
	I1031 17:19:51.045956  236821 cni.go:95] Creating CNI manager for ""
	I1031 17:19:51.045987  236821 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 17:19:51.046000  236821 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:19:51.046017  236821 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-171820 NodeName:default-k8s-diff-port-171820 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1031 17:19:51.046190  236821 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-diff-port-171820"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:19:51.046300  236821 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-diff-port-171820 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-171820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1031 17:19:51.046354  236821 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1031 17:19:51.054936  236821 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:19:51.055008  236821 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:19:51.063596  236821 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (521 bytes)
	I1031 17:19:51.080155  236821 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:19:51.097746  236821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
	I1031 17:19:51.114500  236821 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1031 17:19:51.118194  236821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:19:51.128237  236821 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/default-k8s-diff-port-171820 for IP: 192.168.67.2
	I1031 17:19:51.128396  236821 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key
	I1031 17:19:51.128460  236821 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key
	I1031 17:19:51.128554  236821 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/default-k8s-diff-port-171820/client.key
	I1031 17:19:51.128645  236821 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/default-k8s-diff-port-171820/apiserver.key.c7fa3a9e
	I1031 17:19:51.128699  236821 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/default-k8s-diff-port-171820/proxy-client.key
	I1031 17:19:51.128811  236821 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem (1338 bytes)
	W1031 17:19:51.128855  236821 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097_empty.pem, impossibly tiny 0 bytes
	I1031 17:19:51.128871  236821 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:19:51.128905  236821 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem (1078 bytes)
	I1031 17:19:51.128936  236821 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:19:51.128971  236821 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem (1679 bytes)
	I1031 17:19:51.129020  236821 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem (1708 bytes)
	I1031 17:19:51.129618  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/default-k8s-diff-port-171820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:19:51.148291  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/default-k8s-diff-port-171820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 17:19:51.169522  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/default-k8s-diff-port-171820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:19:51.191601  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/default-k8s-diff-port-171820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 17:19:51.212512  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:19:51.234121  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:19:51.253962  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:19:51.276613  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:19:51.297556  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem --> /usr/share/ca-certificates/10097.pem (1338 bytes)
	I1031 17:19:51.317718  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /usr/share/ca-certificates/100972.pem (1708 bytes)
	I1031 17:19:51.341445  236821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:19:51.363232  236821 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1031 17:19:51.377147  236821 ssh_runner.go:195] Run: openssl version
	I1031 17:19:51.382271  236821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10097.pem && ln -fs /usr/share/ca-certificates/10097.pem /etc/ssl/certs/10097.pem"
	I1031 17:19:51.392146  236821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10097.pem
	I1031 17:19:51.395450  236821 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 31 16:41 /usr/share/ca-certificates/10097.pem
	I1031 17:19:51.395517  236821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10097.pem
	I1031 17:19:51.400750  236821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10097.pem /etc/ssl/certs/51391683.0"
	I1031 17:19:51.409168  236821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100972.pem && ln -fs /usr/share/ca-certificates/100972.pem /etc/ssl/certs/100972.pem"
	I1031 17:19:51.417947  236821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100972.pem
	I1031 17:19:51.422007  236821 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 31 16:41 /usr/share/ca-certificates/100972.pem
	I1031 17:19:51.422070  236821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100972.pem
	I1031 17:19:51.427423  236821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100972.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:19:51.435585  236821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:19:51.445592  236821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:19:51.449053  236821 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 31 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:19:51.449114  236821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:19:51.454172  236821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
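The `openssl x509 -hash` / `ln -fs` sequence in the log above follows OpenSSL's subject-hash convention: each CA certificate is made discoverable under `<subject-hash>.0` in the certs directory (hence the `51391683.0` and `b5213941.0` names). A minimal sketch of that convention, using throwaway paths rather than minikube's real ones:

```shell
# Reproduce the subject-hash symlink convention seen in the log above.
# Assumes `openssl` is on PATH; all paths below are scratch paths.
set -e
dir=$(mktemp -d)

# Generate a throwaway self-signed CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -subj "/CN=demoCA" 2>/dev/null

# `openssl x509 -hash` prints the 8-hex-digit subject hash of the cert.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")

# Link the cert under <hash>.0 so TLS libraries can look it up by subject.
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

The `.0` suffix disambiguates distinct certificates that happen to share a subject hash (`.1`, `.2`, ... would follow).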
	I1031 17:19:51.461793  236821 kubeadm.go:396] StartCluster: {Name:default-k8s-diff-port-171820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-171820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:19:51.461897  236821 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1031 17:19:51.461937  236821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 17:19:51.488185  236821 cri.go:87] found id: "2967726af0bc8cf50ec31d4d21d4d79f945e436c957b8611a0ffa8622c63cea3"
	I1031 17:19:51.488222  236821 cri.go:87] found id: "f1519c268236888ec4ed71a1f44888905edb30225913d0a84b8ba1f2eaad3d4f"
	I1031 17:19:51.488234  236821 cri.go:87] found id: "f9badc6b5b92200d33e1bf4e6000fba7ac9ae2e5ab7e9e15b10f93766159e32d"
	I1031 17:19:51.488245  236821 cri.go:87] found id: "cac85f4c19f93de642f6ede090abd5973f16ac22c1d7f63077a00daa5d31bec0"
	I1031 17:19:51.488255  236821 cri.go:87] found id: "ceb3bb5665b82a257420815cd5660a5bc07417181f71d240942f3c5e0d11550f"
	I1031 17:19:51.488266  236821 cri.go:87] found id: "e4119e9f22390ca513f1e79d0631af0db0eadbb15260dafd2a351a6e4dcf0aff"
	I1031 17:19:51.488281  236821 cri.go:87] found id: "726a4ebbf4c163f66ffd1fca45e8b49e6809635cf14135d063e9d969ab5e90f7"
	I1031 17:19:51.488295  236821 cri.go:87] found id: "8106e3b911e189ba7cd3a4abb6e50671458bd82dcfa2018a48c7f2709294252c"
	I1031 17:19:51.488309  236821 cri.go:87] found id: ""
	I1031 17:19:51.488357  236821 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1031 17:19:51.500718  236821 cri.go:114] JSON = null
	W1031 17:19:51.500769  236821 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I1031 17:19:51.500828  236821 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:19:51.507964  236821 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1031 17:19:51.507990  236821 kubeadm.go:627] restartCluster start
	I1031 17:19:51.508035  236821 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 17:19:51.514651  236821 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:51.515371  236821 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-171820" does not appear in /home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 17:19:51.515751  236821 kubeconfig.go:146] "default-k8s-diff-port-171820" context is missing from /home/jenkins/minikube-integration/15232-3650/kubeconfig - will repair!
	I1031 17:19:51.517713  236821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3650/kubeconfig: {Name:mkbe3dcb9ce3e3942a7be44b5e867e137f1872a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:19:51.519742  236821 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 17:19:51.527026  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:51.527074  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:51.535758  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:51.736181  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:51.736284  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:51.745640  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:51.935841  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:51.935935  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:51.945460  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:52.136780  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:52.136867  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:52.145716  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:52.335935  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:52.336030  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:52.345183  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:52.536417  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:52.536501  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:52.545690  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:52.735969  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:52.736063  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:52.745050  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:52.936327  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:52.936421  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:52.945452  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:53.136807  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:53.136890  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:53.146377  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:53.336442  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:53.336528  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:53.346467  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:53.536735  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:53.536816  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:53.546003  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:53.736325  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:53.736407  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:53.745383  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:53.936633  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:53.936737  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:53.945476  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:54.136648  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:54.136718  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:54.145790  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:54.336192  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:54.336283  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:54.345366  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
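Each "Checking apiserver status ..." round above runs the same probe roughly every 200ms: `pgrep` for the kube-apiserver command line, treating exit status 1 (no match) as "not up yet". The polling loop can be approximated as follows; the function name, pattern, and timeout here are illustrative, not minikube's actual code:

```shell
# Poll for a process by command-line pattern, as the repeated pgrep
# probes in the log above do. Interval mirrors the ~200ms log cadence.
wait_for_process() {
  pattern=$1
  deadline=$(( $(date +%s) + ${2:-10} ))   # second arg: timeout in seconds
  while [ "$(date +%s)" -lt "$deadline" ]; do
    # pgrep -f matches against the full command line; exit 0 means found.
    if pgrep -f "$pattern" >/dev/null 2>&1; then
      echo "found"
      return 0
    fi
    sleep 0.2
  done
  echo "timeout"
  return 1
}

# Example: wait for a background sleep to appear in the process table.
sleep 30 & bgpid=$!
wait_for_process "sleep 30" 5
kill "$bgpid"
```

In the log this probe alternates with a short sleep until either the process appears or the restart logic gives up and declares "needs reconfigure".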
	I1031 17:19:51.860300  202010 pod_ready.go:102] pod "coredns-5644d7b6d9-bsxlg" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:53.860744  202010 pod_ready.go:92] pod "coredns-5644d7b6d9-bsxlg" in "kube-system" namespace has status "Ready":"True"
	I1031 17:19:53.860772  202010 pod_ready.go:81] duration metric: took 11.009200902s waiting for pod "coredns-5644d7b6d9-bsxlg" in "kube-system" namespace to be "Ready" ...
	I1031 17:19:53.860788  202010 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-njxrx" in "kube-system" namespace to be "Ready" ...
	I1031 17:19:53.865503  202010 pod_ready.go:92] pod "kube-proxy-njxrx" in "kube-system" namespace has status "Ready":"True"
	I1031 17:19:53.865525  202010 pod_ready.go:81] duration metric: took 4.729408ms waiting for pod "kube-proxy-njxrx" in "kube-system" namespace to be "Ready" ...
	I1031 17:19:53.865534  202010 pod_ready.go:38] duration metric: took 11.05402794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:19:53.865558  202010 api_server.go:51] waiting for apiserver process to appear ...
	I1031 17:19:53.865606  202010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:19:53.877123  202010 api_server.go:71] duration metric: took 11.186559392s to wait for apiserver process to appear ...
	I1031 17:19:53.877157  202010 api_server.go:87] waiting for apiserver healthz status ...
	I1031 17:19:53.877171  202010 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1031 17:19:53.882203  202010 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1031 17:19:53.883130  202010 api_server.go:140] control plane version: v1.16.0
	I1031 17:19:53.883152  202010 api_server.go:130] duration metric: took 5.987507ms to wait for apiserver health ...
	I1031 17:19:53.883160  202010 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 17:19:53.886809  202010 system_pods.go:59] 5 kube-system pods found
	I1031 17:19:53.886835  202010 system_pods.go:61] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:19:53.886843  202010 system_pods.go:61] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:19:53.886850  202010 system_pods.go:61] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:19:53.886860  202010 system_pods.go:61] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:19:53.886868  202010 system_pods.go:61] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:19:53.886880  202010 system_pods.go:74] duration metric: took 3.713626ms to wait for pod list to return data ...
	I1031 17:19:53.886888  202010 default_sa.go:34] waiting for default service account to be created ...
	I1031 17:19:53.889213  202010 default_sa.go:45] found service account: "default"
	I1031 17:19:53.889242  202010 default_sa.go:55] duration metric: took 2.339002ms for default service account to be created ...
	I1031 17:19:53.889251  202010 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 17:19:53.892780  202010 system_pods.go:86] 5 kube-system pods found
	I1031 17:19:53.892808  202010 system_pods.go:89] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:19:53.892816  202010 system_pods.go:89] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:19:53.892826  202010 system_pods.go:89] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:19:53.892850  202010 system_pods.go:89] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:19:53.892864  202010 system_pods.go:89] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:19:53.892886  202010 retry.go:31] will retry after 227.257272ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 17:19:54.125449  202010 system_pods.go:86] 5 kube-system pods found
	I1031 17:19:54.125491  202010 system_pods.go:89] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:19:54.125500  202010 system_pods.go:89] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:19:54.125507  202010 system_pods.go:89] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:19:54.125523  202010 system_pods.go:89] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:19:54.125536  202010 system_pods.go:89] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:19:54.125561  202010 retry.go:31] will retry after 307.639038ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 17:19:54.450719  202010 system_pods.go:86] 5 kube-system pods found
	I1031 17:19:54.450750  202010 system_pods.go:89] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:19:54.450755  202010 system_pods.go:89] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:19:54.450759  202010 system_pods.go:89] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:19:54.450766  202010 system_pods.go:89] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:19:54.450772  202010 system_pods.go:89] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:19:54.450790  202010 retry.go:31] will retry after 348.248857ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 17:19:51.804706  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:53.804858  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:54.536830  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:54.536910  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:54.545431  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:54.545457  236821 api_server.go:165] Checking apiserver status ...
	I1031 17:19:54.545494  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 17:19:54.554934  236821 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:54.554965  236821 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1031 17:19:54.554973  236821 kubeadm.go:1114] stopping kube-system containers ...
	I1031 17:19:54.554988  236821 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1031 17:19:54.555037  236821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 17:19:54.586661  236821 cri.go:87] found id: "2967726af0bc8cf50ec31d4d21d4d79f945e436c957b8611a0ffa8622c63cea3"
	I1031 17:19:54.586695  236821 cri.go:87] found id: "f1519c268236888ec4ed71a1f44888905edb30225913d0a84b8ba1f2eaad3d4f"
	I1031 17:19:54.586706  236821 cri.go:87] found id: "f9badc6b5b92200d33e1bf4e6000fba7ac9ae2e5ab7e9e15b10f93766159e32d"
	I1031 17:19:54.586715  236821 cri.go:87] found id: "cac85f4c19f93de642f6ede090abd5973f16ac22c1d7f63077a00daa5d31bec0"
	I1031 17:19:54.586724  236821 cri.go:87] found id: "ceb3bb5665b82a257420815cd5660a5bc07417181f71d240942f3c5e0d11550f"
	I1031 17:19:54.586734  236821 cri.go:87] found id: "e4119e9f22390ca513f1e79d0631af0db0eadbb15260dafd2a351a6e4dcf0aff"
	I1031 17:19:54.586753  236821 cri.go:87] found id: "726a4ebbf4c163f66ffd1fca45e8b49e6809635cf14135d063e9d969ab5e90f7"
	I1031 17:19:54.586768  236821 cri.go:87] found id: "8106e3b911e189ba7cd3a4abb6e50671458bd82dcfa2018a48c7f2709294252c"
	I1031 17:19:54.586785  236821 cri.go:87] found id: ""
	I1031 17:19:54.586796  236821 cri.go:232] Stopping containers: [2967726af0bc8cf50ec31d4d21d4d79f945e436c957b8611a0ffa8622c63cea3 f1519c268236888ec4ed71a1f44888905edb30225913d0a84b8ba1f2eaad3d4f f9badc6b5b92200d33e1bf4e6000fba7ac9ae2e5ab7e9e15b10f93766159e32d cac85f4c19f93de642f6ede090abd5973f16ac22c1d7f63077a00daa5d31bec0 ceb3bb5665b82a257420815cd5660a5bc07417181f71d240942f3c5e0d11550f e4119e9f22390ca513f1e79d0631af0db0eadbb15260dafd2a351a6e4dcf0aff 726a4ebbf4c163f66ffd1fca45e8b49e6809635cf14135d063e9d969ab5e90f7 8106e3b911e189ba7cd3a4abb6e50671458bd82dcfa2018a48c7f2709294252c]
	I1031 17:19:54.586846  236821 ssh_runner.go:195] Run: which crictl
	I1031 17:19:54.591229  236821 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 2967726af0bc8cf50ec31d4d21d4d79f945e436c957b8611a0ffa8622c63cea3 f1519c268236888ec4ed71a1f44888905edb30225913d0a84b8ba1f2eaad3d4f f9badc6b5b92200d33e1bf4e6000fba7ac9ae2e5ab7e9e15b10f93766159e32d cac85f4c19f93de642f6ede090abd5973f16ac22c1d7f63077a00daa5d31bec0 ceb3bb5665b82a257420815cd5660a5bc07417181f71d240942f3c5e0d11550f e4119e9f22390ca513f1e79d0631af0db0eadbb15260dafd2a351a6e4dcf0aff 726a4ebbf4c163f66ffd1fca45e8b49e6809635cf14135d063e9d969ab5e90f7 8106e3b911e189ba7cd3a4abb6e50671458bd82dcfa2018a48c7f2709294252c
	I1031 17:19:54.625082  236821 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 17:19:54.636682  236821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:19:54.644112  236821 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Oct 31 17:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct 31 17:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Oct 31 17:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 31 17:18 /etc/kubernetes/scheduler.conf
	
	I1031 17:19:54.644181  236821 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1031 17:19:54.651385  236821 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1031 17:19:54.660559  236821 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1031 17:19:54.668690  236821 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:54.668753  236821 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1031 17:19:54.676018  236821 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1031 17:19:54.684222  236821 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1031 17:19:54.684289  236821 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
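The grep/rm sequence above checks whether each kubeconfig still references the expected control-plane endpoint and deletes any that do not, so `kubeadm init phase kubeconfig` regenerates them. A sketch of that check, with the endpoint and file paths as illustrative stand-ins:

```shell
# Sketch of the stale-kubeconfig check from the log above: keep a conf
# only if it references the expected endpoint, else remove it so it is
# regenerated. Endpoint value mirrors the log; paths are scratch paths.
set -e
endpoint="https://control-plane.minikube.internal:8444"
dir=$(mktemp -d)

printf 'server: %s\n' "$endpoint" > "$dir/admin.conf"         # up to date
printf 'server: https://old:8443\n' > "$dir/scheduler.conf"   # stale

for f in "$dir"/*.conf; do
  # grep exits 1 when the endpoint is absent, like the status-1 lines above.
  grep -q "$endpoint" "$f" || rm -f "$f"
done
ls "$dir"
```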
	I1031 17:19:54.692208  236821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:19:54.700165  236821 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 17:19:54.700191  236821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:19:54.747529  236821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:19:56.233950  236821 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.48638013s)
	I1031 17:19:56.233994  236821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:19:56.392180  236821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:19:56.448035  236821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:19:56.560049  236821 api_server.go:51] waiting for apiserver process to appear ...
	I1031 17:19:56.560148  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:19:57.069982  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:19:57.569764  236821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:19:57.583020  236821 api_server.go:71] duration metric: took 1.022977789s to wait for apiserver process to appear ...
	I1031 17:19:57.583051  236821 api_server.go:87] waiting for apiserver healthz status ...
	I1031 17:19:57.583064  236821 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I1031 17:19:57.583417  236821 api_server.go:268] stopped: https://192.168.67.2:8444/healthz: Get "https://192.168.67.2:8444/healthz": dial tcp 192.168.67.2:8444: connect: connection refused
	I1031 17:19:58.084176  236821 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I1031 17:19:54.803881  202010 system_pods.go:86] 5 kube-system pods found
	I1031 17:19:54.803915  202010 system_pods.go:89] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:19:54.803924  202010 system_pods.go:89] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:19:54.803930  202010 system_pods.go:89] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:19:54.803941  202010 system_pods.go:89] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:19:54.803948  202010 system_pods.go:89] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:19:54.803964  202010 retry.go:31] will retry after 437.769008ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 17:19:55.249748  202010 system_pods.go:86] 5 kube-system pods found
	I1031 17:19:55.249787  202010 system_pods.go:89] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:19:55.249796  202010 system_pods.go:89] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:19:55.249802  202010 system_pods.go:89] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:19:55.249814  202010 system_pods.go:89] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:19:55.249821  202010 system_pods.go:89] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:19:55.249840  202010 retry.go:31] will retry after 665.003868ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 17:19:55.944427  202010 system_pods.go:86] 5 kube-system pods found
	I1031 17:19:55.944467  202010 system_pods.go:89] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:19:55.944476  202010 system_pods.go:89] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:19:55.944482  202010 system_pods.go:89] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:19:55.944491  202010 system_pods.go:89] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:19:55.944500  202010 system_pods.go:89] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:19:55.944517  202010 retry.go:31] will retry after 655.575962ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 17:19:56.604395  202010 system_pods.go:86] 5 kube-system pods found
	I1031 17:19:56.604430  202010 system_pods.go:89] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:19:56.604437  202010 system_pods.go:89] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:19:56.604445  202010 system_pods.go:89] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:19:56.604456  202010 system_pods.go:89] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:19:56.604465  202010 system_pods.go:89] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:19:56.604484  202010 retry.go:31] will retry after 812.142789ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 17:19:57.420308  202010 system_pods.go:86] 5 kube-system pods found
	I1031 17:19:57.420335  202010 system_pods.go:89] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:19:57.420341  202010 system_pods.go:89] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:19:57.420346  202010 system_pods.go:89] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:19:57.420353  202010 system_pods.go:89] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:19:57.420358  202010 system_pods.go:89] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:19:57.420371  202010 retry.go:31] will retry after 1.109165795s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 17:19:58.533716  202010 system_pods.go:86] 5 kube-system pods found
	I1031 17:19:58.533745  202010 system_pods.go:89] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:19:58.533751  202010 system_pods.go:89] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:19:58.533755  202010 system_pods.go:89] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:19:58.533762  202010 system_pods.go:89] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:19:58.533767  202010 system_pods.go:89] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:19:58.533783  202010 retry.go:31] will retry after 1.54277181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
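	The `retry.go:31` entries above show minikube polling the kube-system pods with a growing delay until the control-plane components (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) appear. A minimal sketch of that pattern, in Python rather than minikube's Go, with a hypothetical `list_components` stub standing in for the pod lister:

```python
import time

REQUIRED = {"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}

def wait_for_components(list_components, delay=0.4, factor=1.5, max_tries=20):
    """Retry with a growing delay until every required component is listed."""
    missing = REQUIRED
    for attempt in range(max_tries):
        missing = REQUIRED - set(list_components())
        if not missing:
            return attempt + 1  # number of polls it took
        time.sleep(delay)       # the delay between retries keeps growing
        delay *= factor
    raise TimeoutError(f"missing components: {sorted(missing)}")

# Simulated pod listings: the control plane shows up on the third poll,
# mirroring the log where only 5 non-control-plane pods are found at first.
polls = iter([
    {"coredns", "kube-proxy"},
    {"coredns", "kube-proxy", "etcd"},
    REQUIRED | {"coredns", "kube-proxy"},
])
attempts = wait_for_components(lambda: next(polls), delay=0.01)
print(attempts)  # 3
```

This is only an illustration of the retry shape visible in the log, not minikube's actual implementation.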
	I1031 17:19:55.805120  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:19:58.305449  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:20:00.631488  236821 api_server.go:278] https://192.168.67.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 17:20:00.631519  236821 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 17:20:01.084194  236821 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I1031 17:20:01.088912  236821 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1031 17:20:01.088942  236821 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1031 17:20:01.583697  236821 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I1031 17:20:01.588555  236821 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1031 17:20:01.588578  236821 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1031 17:20:02.084173  236821 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I1031 17:20:02.089045  236821 api_server.go:278] https://192.168.67.2:8444/healthz returned 200:
	ok
	I1031 17:20:02.096699  236821 api_server.go:140] control plane version: v1.25.3
	I1031 17:20:02.096732  236821 api_server.go:130] duration metric: took 4.513673976s to wait for apiserver health ...
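	The 403 → 500 → 200 progression above is minikube's apiserver health wait: it probes `https://…:8444/healthz` repeatedly until it returns 200 (403 while anonymous access is still forbidden, 500 while poststart hooks such as `rbac/bootstrap-roles` are still failing). A minimal sketch of that loop, assuming a hypothetical `probe` callable in place of the real HTTPS request:

```python
import time

def wait_for_healthz(probe, interval=0.5, timeout=30.0):
    """Poll an apiserver /healthz probe until it reports HTTP 200."""
    deadline = time.monotonic() + timeout
    attempts = 0
    while time.monotonic() < deadline:
        attempts += 1
        status, body = probe()
        if status == 200:
            return attempts
        # 403: RBAC not bootstrapped yet; 500: poststart hooks still failing.
        time.sleep(interval)
    raise TimeoutError("apiserver /healthz never returned 200")

# Simulated responses matching the log: forbidden, failing hooks, then ok.
responses = iter([(403, "forbidden"), (500, "healthz check failed"), (200, "ok")])
attempts = wait_for_healthz(lambda: next(responses), interval=0.01)
print(attempts)  # 3
```

The sketch only mirrors the polling behaviour visible in `api_server.go:252`/`api_server.go:278`; the real code is Go and handles TLS, certificates, and per-status logging.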
	I1031 17:20:02.096744  236821 cni.go:95] Creating CNI manager for ""
	I1031 17:20:02.096752  236821 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 17:20:02.099404  236821 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1031 17:20:02.100915  236821 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1031 17:20:02.105161  236821 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1031 17:20:02.105183  236821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1031 17:20:02.120553  236821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1031 17:20:03.297239  236821 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.176644265s)
	I1031 17:20:03.297278  236821 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 17:20:03.306921  236821 system_pods.go:59] 9 kube-system pods found
	I1031 17:20:03.306949  236821 system_pods.go:61] "coredns-565d847f94-9wvmp" [1fd88277-b14a-4947-a1de-5064f9b720d2] Running
	I1031 17:20:03.306958  236821 system_pods.go:61] "etcd-default-k8s-diff-port-171820" [29b1545a-a940-42ce-83a3-f4257f6078b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 17:20:03.306963  236821 system_pods.go:61] "kindnet-vq4dd" [0b8f321d-00c0-4c14-ad34-01200a789cba] Running
	I1031 17:20:03.306970  236821 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-171820" [527d8f44-7371-4e58-9120-f9be27010bfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 17:20:03.306984  236821 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-171820" [d048c279-7a27-4db4-86f8-0b96d9716525] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 17:20:03.306992  236821 system_pods.go:61] "kube-proxy-xs6hb" [a31c386a-1d8f-45a9-8bbd-030df07f67a3] Running
	I1031 17:20:03.307003  236821 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-171820" [972e2058-734b-4a5c-8d35-aa50103bdc23] Running
	I1031 17:20:03.307012  236821 system_pods.go:61] "metrics-server-5c8fd5cf8-d8j9c" [a2f9e37b-e11b-4768-936c-adde18f8f41f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:20:03.307021  236821 system_pods.go:61] "storage-provisioner" [9b2d59db-3545-4484-adc7-8044f70b1eea] Running
	I1031 17:20:03.307027  236821 system_pods.go:74] duration metric: took 9.743352ms to wait for pod list to return data ...
	I1031 17:20:03.307037  236821 node_conditions.go:102] verifying NodePressure condition ...
	I1031 17:20:03.351104  236821 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1031 17:20:03.351136  236821 node_conditions.go:123] node cpu capacity is 8
	I1031 17:20:03.351148  236821 node_conditions.go:105] duration metric: took 44.103264ms to run NodePressure ...
	I1031 17:20:03.351163  236821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 17:20:03.665325  236821 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1031 17:20:03.670380  236821 kubeadm.go:778] kubelet initialised
	I1031 17:20:03.670411  236821 kubeadm.go:779] duration metric: took 5.056788ms waiting for restarted kubelet to initialise ...
	I1031 17:20:03.670423  236821 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:20:03.676750  236821 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-9wvmp" in "kube-system" namespace to be "Ready" ...
	I1031 17:20:03.681989  236821 pod_ready.go:92] pod "coredns-565d847f94-9wvmp" in "kube-system" namespace has status "Ready":"True"
	I1031 17:20:03.682016  236821 pod_ready.go:81] duration metric: took 5.238978ms waiting for pod "coredns-565d847f94-9wvmp" in "kube-system" namespace to be "Ready" ...
	I1031 17:20:03.682030  236821 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-171820" in "kube-system" namespace to be "Ready" ...
	I1031 17:20:00.080796  202010 system_pods.go:86] 5 kube-system pods found
	I1031 17:20:00.080827  202010 system_pods.go:89] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:20:00.080835  202010 system_pods.go:89] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:20:00.080841  202010 system_pods.go:89] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:20:00.080850  202010 system_pods.go:89] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:20:00.080858  202010 system_pods.go:89] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:20:00.080876  202010 retry.go:31] will retry after 2.200241603s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 17:20:02.286176  202010 system_pods.go:86] 5 kube-system pods found
	I1031 17:20:02.286207  202010 system_pods.go:89] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:20:02.286212  202010 system_pods.go:89] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:20:02.286217  202010 system_pods.go:89] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:20:02.286224  202010 system_pods.go:89] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:20:02.286229  202010 system_pods.go:89] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:20:02.286243  202010 retry.go:31] will retry after 2.087459713s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 17:20:04.378515  202010 system_pods.go:86] 5 kube-system pods found
	I1031 17:20:04.378548  202010 system_pods.go:89] "coredns-5644d7b6d9-bsxlg" [42760d99-9218-4ee3-b545-85fdb270ce43] Running
	I1031 17:20:04.378554  202010 system_pods.go:89] "kindnet-7z5ml" [51b384c6-4f85-48f1-8a98-73ae18164702] Running
	I1031 17:20:04.378559  202010 system_pods.go:89] "kube-proxy-njxrx" [5058a255-7edc-441f-920e-6878d9d42b8c] Running
	I1031 17:20:04.378566  202010 system_pods.go:89] "metrics-server-7958775c-g5kmm" [7ed52b3d-c54c-40af-9c17-d7de524748d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 17:20:04.378571  202010 system_pods.go:89] "storage-provisioner" [f8347c99-1fcc-46e1-8bf6-7b82db686df7] Running
	I1031 17:20:04.378586  202010 retry.go:31] will retry after 2.615099305s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 17:20:00.804670  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:20:02.805115  213905 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lv6kt" in "kube-system" namespace has status "Ready":"False"
	I1031 17:20:07.628683  190637 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1031 17:20:07.628909  190637 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1031 17:20:07.628946  190637 kubeadm.go:317] 
	I1031 17:20:07.629024  190637 kubeadm.go:317] Unfortunately, an error has occurred:
	I1031 17:20:07.629097  190637 kubeadm.go:317] 	timed out waiting for the condition
	I1031 17:20:07.629108  190637 kubeadm.go:317] 
	I1031 17:20:07.629163  190637 kubeadm.go:317] This error is likely caused by:
	I1031 17:20:07.629211  190637 kubeadm.go:317] 	- The kubelet is not running
	I1031 17:20:07.629344  190637 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1031 17:20:07.629353  190637 kubeadm.go:317] 
	I1031 17:20:07.629481  190637 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1031 17:20:07.629536  190637 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1031 17:20:07.629576  190637 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1031 17:20:07.629586  190637 kubeadm.go:317] 
	I1031 17:20:07.629695  190637 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1031 17:20:07.629802  190637 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1031 17:20:07.629882  190637 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1031 17:20:07.629968  190637 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I1031 17:20:07.630072  190637 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1031 17:20:07.630189  190637 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I1031 17:20:07.631485  190637 kubeadm.go:317] W1031 17:18:11.562004   11454 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1031 17:20:07.631679  190637 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1031 17:20:07.631787  190637 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:20:07.631870  190637 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1031 17:20:07.631928  190637 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1031 17:20:07.632000  190637 kubeadm.go:398] StartCluster complete in 8m6.1386072s
	I1031 17:20:07.632058  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1031 17:20:07.632155  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 17:20:07.657524  190637 cri.go:87] found id: ""
	I1031 17:20:07.657554  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.657563  190637 logs.go:276] No container was found matching "kube-apiserver"
	I1031 17:20:07.657572  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1031 17:20:07.657628  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 17:20:07.683067  190637 cri.go:87] found id: ""
	I1031 17:20:07.683098  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.683108  190637 logs.go:276] No container was found matching "etcd"
	I1031 17:20:07.683117  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1031 17:20:07.683165  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 17:20:07.708474  190637 cri.go:87] found id: ""
	I1031 17:20:07.708498  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.708503  190637 logs.go:276] No container was found matching "coredns"
	I1031 17:20:07.708509  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1031 17:20:07.708553  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 17:20:07.733306  190637 cri.go:87] found id: ""
	I1031 17:20:07.733332  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.733341  190637 logs.go:276] No container was found matching "kube-scheduler"
	I1031 17:20:07.733349  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1031 17:20:07.733399  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 17:20:07.759842  190637 cri.go:87] found id: ""
	I1031 17:20:07.759870  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.759882  190637 logs.go:276] No container was found matching "kube-proxy"
	I1031 17:20:07.759888  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1031 17:20:07.759930  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1031 17:20:07.784929  190637 cri.go:87] found id: ""
	I1031 17:20:07.784958  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.784965  190637 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1031 17:20:07.784970  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1031 17:20:07.785012  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 17:20:07.810755  190637 cri.go:87] found id: ""
	I1031 17:20:07.810785  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.810794  190637 logs.go:276] No container was found matching "storage-provisioner"
	I1031 17:20:07.810801  190637 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 17:20:07.810865  190637 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 17:20:07.835591  190637 cri.go:87] found id: ""
	I1031 17:20:07.835618  190637 logs.go:274] 0 containers: []
	W1031 17:20:07.835626  190637 logs.go:276] No container was found matching "kube-controller-manager"
	I1031 17:20:07.835636  190637 logs.go:123] Gathering logs for kubelet ...
	I1031 17:20:07.835649  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 17:20:07.853867  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:17 kubernetes-upgrade-171032 kubelet[12551]: E1031 17:19:17.859999   12551 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.854235  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:18 kubernetes-upgrade-171032 kubelet[12563]: E1031 17:19:18.615680   12563 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.854617  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:19 kubernetes-upgrade-171032 kubelet[12573]: E1031 17:19:19.358228   12573 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.854968  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:20 kubernetes-upgrade-171032 kubelet[12584]: E1031 17:19:20.119095   12584 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.855310  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:20 kubernetes-upgrade-171032 kubelet[12595]: E1031 17:19:20.859438   12595 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.855654  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:21 kubernetes-upgrade-171032 kubelet[12607]: E1031 17:19:21.609296   12607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.856000  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:22 kubernetes-upgrade-171032 kubelet[12618]: E1031 17:19:22.362960   12618 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.856397  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:23 kubernetes-upgrade-171032 kubelet[12629]: E1031 17:19:23.109818   12629 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.856822  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:23 kubernetes-upgrade-171032 kubelet[12640]: E1031 17:19:23.859026   12640 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.857201  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:24 kubernetes-upgrade-171032 kubelet[12651]: E1031 17:19:24.608686   12651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.857575  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:25 kubernetes-upgrade-171032 kubelet[12662]: E1031 17:19:25.358423   12662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.858072  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:26 kubernetes-upgrade-171032 kubelet[12673]: E1031 17:19:26.109537   12673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.858511  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:26 kubernetes-upgrade-171032 kubelet[12684]: E1031 17:19:26.857818   12684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.858879  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:27 kubernetes-upgrade-171032 kubelet[12694]: E1031 17:19:27.616523   12694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.859236  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:28 kubernetes-upgrade-171032 kubelet[12704]: E1031 17:19:28.359030   12704 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.859630  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:29 kubernetes-upgrade-171032 kubelet[12715]: E1031 17:19:29.107927   12715 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.860011  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:29 kubernetes-upgrade-171032 kubelet[12726]: E1031 17:19:29.856931   12726 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.860406  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:30 kubernetes-upgrade-171032 kubelet[12737]: E1031 17:19:30.608904   12737 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.860778  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:31 kubernetes-upgrade-171032 kubelet[12749]: E1031 17:19:31.359475   12749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.861133  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:32 kubernetes-upgrade-171032 kubelet[12760]: E1031 17:19:32.107531   12760 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.861482  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:32 kubernetes-upgrade-171032 kubelet[12771]: E1031 17:19:32.861374   12771 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.861872  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:33 kubernetes-upgrade-171032 kubelet[12782]: E1031 17:19:33.605282   12782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.862228  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:34 kubernetes-upgrade-171032 kubelet[12794]: E1031 17:19:34.370314   12794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.862576  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:35 kubernetes-upgrade-171032 kubelet[12805]: E1031 17:19:35.120671   12805 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.862954  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:35 kubernetes-upgrade-171032 kubelet[12816]: E1031 17:19:35.858702   12816 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.863316  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:36 kubernetes-upgrade-171032 kubelet[12829]: E1031 17:19:36.613591   12829 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.863679  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:37 kubernetes-upgrade-171032 kubelet[12839]: E1031 17:19:37.356549   12839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.864057  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:38 kubernetes-upgrade-171032 kubelet[12850]: E1031 17:19:38.107918   12850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.864450  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:38 kubernetes-upgrade-171032 kubelet[12861]: E1031 17:19:38.861447   12861 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.864806  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:39 kubernetes-upgrade-171032 kubelet[12872]: E1031 17:19:39.609535   12872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.865220  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:40 kubernetes-upgrade-171032 kubelet[12882]: E1031 17:19:40.358351   12882 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.865649  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:41 kubernetes-upgrade-171032 kubelet[12894]: E1031 17:19:41.106147   12894 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.866058  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:41 kubernetes-upgrade-171032 kubelet[12905]: E1031 17:19:41.870668   12905 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.866643  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:42 kubernetes-upgrade-171032 kubelet[12916]: E1031 17:19:42.606832   12916 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.867233  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:43 kubernetes-upgrade-171032 kubelet[12927]: E1031 17:19:43.366210   12927 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.867767  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:44 kubernetes-upgrade-171032 kubelet[12938]: E1031 17:19:44.121244   12938 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.868355  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:44 kubernetes-upgrade-171032 kubelet[12949]: E1031 17:19:44.915125   12949 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.868936  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:45 kubernetes-upgrade-171032 kubelet[12960]: E1031 17:19:45.614521   12960 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.869432  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:46 kubernetes-upgrade-171032 kubelet[12971]: E1031 17:19:46.357202   12971 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.870090  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:47 kubernetes-upgrade-171032 kubelet[12981]: E1031 17:19:47.110188   12981 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.870680  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:47 kubernetes-upgrade-171032 kubelet[12992]: E1031 17:19:47.865319   12992 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.871258  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:48 kubernetes-upgrade-171032 kubelet[13003]: E1031 17:19:48.610783   13003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.871841  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:49 kubernetes-upgrade-171032 kubelet[13014]: E1031 17:19:49.364535   13014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.872450  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:50 kubernetes-upgrade-171032 kubelet[13025]: E1031 17:19:50.118765   13025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.873030  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:50 kubernetes-upgrade-171032 kubelet[13035]: E1031 17:19:50.870666   13035 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.873619  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:51 kubernetes-upgrade-171032 kubelet[13046]: E1031 17:19:51.611040   13046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.874116  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:52 kubernetes-upgrade-171032 kubelet[13058]: E1031 17:19:52.364373   13058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.874761  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:53 kubernetes-upgrade-171032 kubelet[13069]: E1031 17:19:53.118969   13069 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.875307  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:53 kubernetes-upgrade-171032 kubelet[13080]: E1031 17:19:53.864484   13080 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.875953  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:54 kubernetes-upgrade-171032 kubelet[13091]: E1031 17:19:54.624050   13091 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.876555  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:55 kubernetes-upgrade-171032 kubelet[13102]: E1031 17:19:55.393244   13102 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.877142  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:56 kubernetes-upgrade-171032 kubelet[13112]: E1031 17:19:56.120404   13112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.877730  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:56 kubernetes-upgrade-171032 kubelet[13123]: E1031 17:19:56.857219   13123 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.878300  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:57 kubernetes-upgrade-171032 kubelet[13133]: E1031 17:19:57.617821   13133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.878711  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:58 kubernetes-upgrade-171032 kubelet[13144]: E1031 17:19:58.360514   13144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.879065  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:59 kubernetes-upgrade-171032 kubelet[13155]: E1031 17:19:59.118905   13155 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.879415  190637 logs.go:138] Found kubelet problem: Oct 31 17:19:59 kubernetes-upgrade-171032 kubelet[13166]: E1031 17:19:59.857691   13166 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.879756  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:00 kubernetes-upgrade-171032 kubelet[13176]: E1031 17:20:00.611772   13176 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.880126  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:01 kubernetes-upgrade-171032 kubelet[13186]: E1031 17:20:01.358451   13186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.880481  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:02 kubernetes-upgrade-171032 kubelet[13196]: E1031 17:20:02.114080   13196 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.880822  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:02 kubernetes-upgrade-171032 kubelet[13206]: E1031 17:20:02.917959   13206 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.881167  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:03 kubernetes-upgrade-171032 kubelet[13217]: E1031 17:20:03.619555   13217 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.881509  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:04 kubernetes-upgrade-171032 kubelet[13227]: E1031 17:20:04.356232   13227 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.881847  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:05 kubernetes-upgrade-171032 kubelet[13238]: E1031 17:20:05.109111   13238 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.882186  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:05 kubernetes-upgrade-171032 kubelet[13250]: E1031 17:20:05.857299   13250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.882538  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:06 kubernetes-upgrade-171032 kubelet[13261]: E1031 17:20:06.608161   13261 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1031 17:20:07.882881  190637 logs.go:138] Found kubelet problem: Oct 31 17:20:07 kubernetes-upgrade-171032 kubelet[13272]: E1031 17:20:07.357203   13272 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:20:07.883015  190637 logs.go:123] Gathering logs for dmesg ...
	I1031 17:20:07.883032  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 17:20:07.902215  190637 logs.go:123] Gathering logs for describe nodes ...
	I1031 17:20:07.902267  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1031 17:20:07.959431  190637 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1031 17:20:07.959458  190637 logs.go:123] Gathering logs for containerd ...
	I1031 17:20:07.959470  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1031 17:20:08.022467  190637 logs.go:123] Gathering logs for container status ...
	I1031 17:20:08.022507  190637 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1031 17:20:08.053446  190637 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1031 17:18:11.562004   11454 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1031 17:20:08.053495  190637 out.go:239] * 
	W1031 17:20:08.053712  190637 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1031 17:20:08.053747  190637 out.go:239] * 
	W1031 17:20:08.054559  190637 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 17:20:08.056952  190637 out.go:177] X Problems detected in kubelet:
	I1031 17:20:08.059276  190637 out.go:177]   Oct 31 17:19:17 kubernetes-upgrade-171032 kubelet[12551]: E1031 17:19:17.859999   12551 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:20:08.060863  190637 out.go:177]   Oct 31 17:19:18 kubernetes-upgrade-171032 kubelet[12563]: E1031 17:19:18.615680   12563 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:20:08.062503  190637 out.go:177]   Oct 31 17:19:19 kubernetes-upgrade-171032 kubelet[12573]: E1031 17:19:19.358228   12573 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1031 17:20:08.066358  190637 out.go:177] 
	W1031 17:20:08.068100  190637 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1031 17:20:08.068251  190637 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1031 17:20:08.068324  190637 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1031 17:20:08.070131  190637 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2022-10-31 17:11:26 UTC, end at Mon 2022-10-31 17:20:09 UTC. --
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.294945519Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.312547890Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.312610166Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.328758861Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.328811337Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.345938585Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.345987444Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.364911914Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.364967079Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.382479580Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.382533912Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.399371370Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.399435593Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.417140747Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.417209129Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.434134821Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.434202047Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.451826341Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.451883812Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.468714538Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.468766933Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.485548021Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.485596016Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.502341056Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Oct 31 17:18:11 kubernetes-upgrade-171032 containerd[499]: time="2022-10-31T17:18:11.502394037Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: 02 42 3a f2 e2 6f 02 42 c0 a8 5e 02 08 00
	[  +1.951797] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-75b9f201fef4
	[  +0.000008] ll header: 00000000: 02 42 3a f2 e2 6f 02 42 c0 a8 5e 02 08 00
	[  +0.000036] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-75b9f201fef4
	[  +0.000006] ll header: 00000000: 02 42 3a f2 e2 6f 02 42 c0 a8 5e 02 08 00
	[  +0.067954] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-75b9f201fef4
	[  +0.000006] ll header: 00000000: 02 42 3a f2 e2 6f 02 42 c0 a8 5e 02 08 00
	[  +4.187587] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-75b9f201fef4
	[  +0.000005] ll header: 00000000: 02 42 3a f2 e2 6f 02 42 c0 a8 5e 02 08 00
	[  +0.003986] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-75b9f201fef4
	[  -0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-75b9f201fef4
	[  +0.000005] ll header: 00000000: 02 42 3a f2 e2 6f 02 42 c0 a8 5e 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 3a f2 e2 6f 02 42 c0 a8 5e 02 08 00
	[  +8.187197] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-75b9f201fef4
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-75b9f201fef4
	[  +0.000005] ll header: 00000000: 02 42 3a f2 e2 6f 02 42 c0 a8 5e 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 3a f2 e2 6f 02 42 c0 a8 5e 02 08 00
	[  +0.003994] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-75b9f201fef4
	[  +0.000006] ll header: 00000000: 02 42 3a f2 e2 6f 02 42 c0 a8 5e 02 08 00
	[Oct31 17:20] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ca0a2ba0891f
	[  +0.000008] ll header: 00000000: 02 42 49 85 89 57 02 42 c0 a8 43 02 08 00
	[  +1.007420] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ca0a2ba0891f
	[  +0.000005] ll header: 00000000: 02 42 49 85 89 57 02 42 c0 a8 43 02 08 00
	[  +2.011826] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ca0a2ba0891f
	[  +0.000008] ll header: 00000000: 02 42 49 85 89 57 02 42 c0 a8 43 02 08 00
	
	* 
	* ==> kernel <==
	*  17:20:09 up  1:02,  0 users,  load average: 2.01, 1.86, 1.64
	Linux kubernetes-upgrade-171032 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-10-31 17:11:26 UTC, end at Mon 2022-10-31 17:20:09 UTC. --
	Oct 31 17:20:06 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 31 17:20:07 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 153.
	Oct 31 17:20:07 kubernetes-upgrade-171032 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 31 17:20:07 kubernetes-upgrade-171032 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 31 17:20:07 kubernetes-upgrade-171032 kubelet[13272]: E1031 17:20:07.357203   13272 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Oct 31 17:20:07 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Oct 31 17:20:07 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 31 17:20:08 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 154.
	Oct 31 17:20:08 kubernetes-upgrade-171032 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 31 17:20:08 kubernetes-upgrade-171032 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 31 17:20:08 kubernetes-upgrade-171032 kubelet[13419]: E1031 17:20:08.118655   13419 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Oct 31 17:20:08 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Oct 31 17:20:08 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 31 17:20:08 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 155.
	Oct 31 17:20:08 kubernetes-upgrade-171032 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 31 17:20:08 kubernetes-upgrade-171032 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 31 17:20:08 kubernetes-upgrade-171032 kubelet[13438]: E1031 17:20:08.868768   13438 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Oct 31 17:20:08 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Oct 31 17:20:08 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 31 17:20:09 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Oct 31 17:20:09 kubernetes-upgrade-171032 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 31 17:20:09 kubernetes-upgrade-171032 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 31 17:20:09 kubernetes-upgrade-171032 kubelet[13570]: E1031 17:20:09.614714   13570 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Oct 31 17:20:09 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Oct 31 17:20:09 kubernetes-upgrade-171032 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E1031 17:20:09.652053  241590 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-171032 -n kubernetes-upgrade-171032
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-171032 -n kubernetes-upgrade-171032: exit status 2 (378.578186ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-171032" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-171032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-171032
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-171032: (2.101833088s)
--- FAIL: TestKubernetesUpgrade (579.54s)

TestNetworkPlugins/group/calico/Start (514.94s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-171018 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-171018 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (8m34.923445964s)

-- stdout --
	* [calico-171018] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node calico-171018 in cluster calico-171018
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1031 17:22:17.526665  268320 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:22:17.526790  268320 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:22:17.526801  268320 out.go:309] Setting ErrFile to fd 2...
	I1031 17:22:17.526806  268320 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:22:17.526915  268320 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
	I1031 17:22:17.527500  268320 out.go:303] Setting JSON to false
	I1031 17:22:17.529444  268320 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3887,"bootTime":1667233050,"procs":1117,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:22:17.529517  268320 start.go:126] virtualization: kvm guest
	I1031 17:22:17.532703  268320 out.go:177] * [calico-171018] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:22:17.534440  268320 notify.go:220] Checking for updates...
	I1031 17:22:17.536123  268320 out.go:177]   - MINIKUBE_LOCATION=15232
	I1031 17:22:17.537991  268320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:22:17.539814  268320 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 17:22:17.542038  268320 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	I1031 17:22:17.543713  268320 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:22:17.545775  268320 config.go:180] Loaded profile config "cilium-171018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 17:22:17.545885  268320 config.go:180] Loaded profile config "default-k8s-diff-port-171820": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 17:22:17.545966  268320 config.go:180] Loaded profile config "kindnet-171017": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 17:22:17.546021  268320 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 17:22:17.578891  268320 docker.go:137] docker version: linux-20.10.21
	I1031 17:22:17.579007  268320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 17:22:17.690179  268320 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-10-31 17:22:17.601464819 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 17:22:17.690285  268320 docker.go:254] overlay module found
	I1031 17:22:17.692840  268320 out.go:177] * Using the docker driver based on user configuration
	I1031 17:22:17.694375  268320 start.go:282] selected driver: docker
	I1031 17:22:17.694403  268320 start.go:808] validating driver "docker" against <nil>
	I1031 17:22:17.694427  268320 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:22:17.695419  268320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 17:22:17.801881  268320 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-10-31 17:22:17.717027277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 17:22:17.802003  268320 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1031 17:22:17.802227  268320 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 17:22:17.804531  268320 out.go:177] * Using Docker driver with root privileges
	I1031 17:22:17.806143  268320 cni.go:95] Creating CNI manager for "calico"
	I1031 17:22:17.806158  268320 start_flags.go:312] Found "Calico" CNI - setting NetworkPlugin=cni
	I1031 17:22:17.806173  268320 start_flags.go:317] config:
	{Name:calico-171018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-171018 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containe
rd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:22:17.808120  268320 out.go:177] * Starting control plane node calico-171018 in cluster calico-171018
	I1031 17:22:17.809176  268320 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1031 17:22:17.810645  268320 out.go:177] * Pulling base image ...
	I1031 17:22:17.812133  268320 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1031 17:22:17.812182  268320 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1031 17:22:17.812192  268320 cache.go:57] Caching tarball of preloaded images
	I1031 17:22:17.812220  268320 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1031 17:22:17.812468  268320 preload.go:174] Found /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:22:17.812486  268320 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I1031 17:22:17.812582  268320 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/config.json ...
	I1031 17:22:17.812607  268320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/config.json: {Name:mkfaff52867b70550cedb7d788ac00ed3ed21b6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:22:17.838672  268320 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1031 17:22:17.838700  268320 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1031 17:22:17.838716  268320 cache.go:208] Successfully downloaded all kic artifacts
	I1031 17:22:17.838750  268320 start.go:364] acquiring machines lock for calico-171018: {Name:mkf13e8afb102283d0f468fd41eda5863fe08aba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:22:17.838907  268320 start.go:368] acquired machines lock for "calico-171018" in 136.006µs
	I1031 17:22:17.838939  268320 start.go:93] Provisioning new machine with config: &{Name:calico-171018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-171018 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1031 17:22:17.839036  268320 start.go:125] createHost starting for "" (driver="docker")
	I1031 17:22:17.841766  268320 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1031 17:22:17.842004  268320 start.go:159] libmachine.API.Create for "calico-171018" (driver="docker")
	I1031 17:22:17.842031  268320 client.go:168] LocalClient.Create starting
	I1031 17:22:17.842092  268320 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem
	I1031 17:22:17.842125  268320 main.go:134] libmachine: Decoding PEM data...
	I1031 17:22:17.842141  268320 main.go:134] libmachine: Parsing certificate...
	I1031 17:22:17.842190  268320 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem
	I1031 17:22:17.842207  268320 main.go:134] libmachine: Decoding PEM data...
	I1031 17:22:17.842216  268320 main.go:134] libmachine: Parsing certificate...
	I1031 17:22:17.842529  268320 cli_runner.go:164] Run: docker network inspect calico-171018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1031 17:22:17.866928  268320 cli_runner.go:211] docker network inspect calico-171018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1031 17:22:17.867000  268320 network_create.go:272] running [docker network inspect calico-171018] to gather additional debugging logs...
	I1031 17:22:17.867017  268320 cli_runner.go:164] Run: docker network inspect calico-171018
	W1031 17:22:17.891069  268320 cli_runner.go:211] docker network inspect calico-171018 returned with exit code 1
	I1031 17:22:17.891102  268320 network_create.go:275] error running [docker network inspect calico-171018]: docker network inspect calico-171018: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-171018
	I1031 17:22:17.891114  268320 network_create.go:277] output of [docker network inspect calico-171018]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-171018
	
	** /stderr **
	I1031 17:22:17.891179  268320 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1031 17:22:17.918665  268320 network.go:246] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-1118cbd038c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c5:1c:9e:f9}}
	I1031 17:22:17.919562  268320 network.go:246] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-690308a47517 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:c2:f8:41:7a}}
	I1031 17:22:17.920223  268320 network.go:246] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-ca0a2ba0891f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:49:85:89:57}}
	I1031 17:22:17.920909  268320 network.go:246] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-eb25a6971ff7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:56:a7:31:3c}}
	I1031 17:22:17.921656  268320 network.go:246] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName:br-5308ad94db5d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:33:2d:97:f0}}
	I1031 17:22:17.922671  268320 network.go:295] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.94.0:0xc0009681d8] misses:0}
	I1031 17:22:17.922714  268320 network.go:241] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1031 17:22:17.922725  268320 network_create.go:115] attempt to create docker network calico-171018 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1031 17:22:17.922785  268320 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-171018 calico-171018
	I1031 17:22:17.985515  268320 network_create.go:99] docker network calico-171018 192.168.94.0/24 created
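The subnet search above (49 → 58 → 67 → 76 → 85 → 94, stepping the third octet by 9 until an unused /24 turns up) can be sketched in Python. This is a hedged illustration of the behavior visible in the log, not minikube's actual `network.go` implementation; `first_free_subnet` is a hypothetical helper:

```python
import ipaddress

# Sketch of the free-subnet scan seen in the log: candidates start at
# 192.168.49.0/24 and advance by 9 in the third octet, skipping any
# subnet already claimed by an existing docker bridge.
def first_free_subnet(taken, start="192.168.49.0/24", step=9, attempts=20):
    net = ipaddress.ip_network(start)
    for _ in range(attempts):
        if str(net) not in taken:
            return str(net)
        octets = list(net.network_address.packed)
        octets[2] += step  # a real implementation must handle overflow past 255
        net = ipaddress.ip_network((bytes(octets), net.prefixlen))
    return None

# The five bridges reported as "taken" in the log:
taken = {"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24",
         "192.168.76.0/24", "192.168.85.0/24"}
print(first_free_subnet(taken))  # 192.168.94.0/24, matching the log
```

The result matches the `using free private subnet 192.168.94.0/24` line, after which the log shows the corresponding `docker network create --subnet=192.168.94.0/24` call.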
	I1031 17:22:17.985548  268320 kic.go:106] calculated static IP "192.168.94.2" for the "calico-171018" container
	I1031 17:22:17.985620  268320 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1031 17:22:18.011419  268320 cli_runner.go:164] Run: docker volume create calico-171018 --label name.minikube.sigs.k8s.io=calico-171018 --label created_by.minikube.sigs.k8s.io=true
	I1031 17:22:18.037011  268320 oci.go:103] Successfully created a docker volume calico-171018
	I1031 17:22:18.037087  268320 cli_runner.go:164] Run: docker run --rm --name calico-171018-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-171018 --entrypoint /usr/bin/test -v calico-171018:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1031 17:22:18.784521  268320 oci.go:107] Successfully prepared a docker volume calico-171018
	I1031 17:22:18.784556  268320 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1031 17:22:18.784575  268320 kic.go:179] Starting extracting preloaded images to volume ...
	I1031 17:22:18.784669  268320 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-171018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1031 17:22:22.551794  268320 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-171018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (3.767037247s)
	I1031 17:22:22.551829  268320 kic.go:188] duration metric: took 3.767251 seconds to extract preloaded images to volume
	W1031 17:22:22.570103  268320 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1031 17:22:22.570530  268320 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1031 17:22:22.678144  268320 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-171018 --name calico-171018 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-171018 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-171018 --network calico-171018 --ip 192.168.94.2 --volume calico-171018:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1031 17:22:23.406870  268320 cli_runner.go:164] Run: docker container inspect calico-171018 --format={{.State.Running}}
	I1031 17:22:23.437483  268320 cli_runner.go:164] Run: docker container inspect calico-171018 --format={{.State.Status}}
	I1031 17:22:23.468816  268320 cli_runner.go:164] Run: docker exec calico-171018 stat /var/lib/dpkg/alternatives/iptables
	I1031 17:22:23.542494  268320 oci.go:144] the created container "calico-171018" has a running status.
	I1031 17:22:23.542528  268320 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15232-3650/.minikube/machines/calico-171018/id_rsa...
	I1031 17:22:23.787709  268320 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15232-3650/.minikube/machines/calico-171018/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1031 17:22:23.875638  268320 cli_runner.go:164] Run: docker container inspect calico-171018 --format={{.State.Status}}
	I1031 17:22:23.906367  268320 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1031 17:22:23.906395  268320 kic_runner.go:114] Args: [docker exec --privileged calico-171018 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1031 17:22:23.997863  268320 cli_runner.go:164] Run: docker container inspect calico-171018 --format={{.State.Status}}
	I1031 17:22:24.029381  268320 machine.go:88] provisioning docker machine ...
	I1031 17:22:24.029427  268320 ubuntu.go:169] provisioning hostname "calico-171018"
	I1031 17:22:24.029489  268320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171018
	I1031 17:22:24.061508  268320 main.go:134] libmachine: Using SSH client type: native
	I1031 17:22:24.061708  268320 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1031 17:22:24.061733  268320 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-171018 && echo "calico-171018" | sudo tee /etc/hostname
	I1031 17:22:24.197843  268320 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-171018
	
	I1031 17:22:24.197928  268320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171018
	I1031 17:22:24.231567  268320 main.go:134] libmachine: Using SSH client type: native
	I1031 17:22:24.231703  268320 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1031 17:22:24.231722  268320 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-171018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-171018/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-171018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:22:24.352226  268320 main.go:134] libmachine: SSH cmd err, output: <nil>: 
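The SSH command above rewrites `/etc/hosts` so the machine's hostname resolves locally: if a `127.0.1.1` entry exists it is replaced, otherwise one is appended, and nothing changes when the hostname is already present. A minimal Python sketch of that logic (a hypothetical `ensure_hostname` helper, mirroring the grep/sed/tee pipeline, not minikube code):

```python
import re

# Replicate the shell logic: no-op if the hostname already resolves,
# rewrite an existing 127.0.1.1 line, otherwise append a new one.
def ensure_hostname(hosts_text, hostname):
    if re.search(r"^.*\s" + re.escape(hostname) + r"$", hosts_text, re.M):
        return hosts_text  # hostname already mapped
    if re.search(r"^127\.0\.1\.1\s.*$", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 " + hostname,
                      hosts_text, flags=re.M)
    return hosts_text + "127.0.1.1 " + hostname + "\n"

print(ensure_hostname("127.0.0.1 localhost\n", "calico-171018"))
```

Like the shell version, the function is idempotent: running it a second time on its own output leaves the file unchanged, which is why the log's second SSH round produces empty output.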
	I1031 17:22:24.352263  268320 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-3650/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-3650/.minikube}
	I1031 17:22:24.352297  268320 ubuntu.go:177] setting up certificates
	I1031 17:22:24.352311  268320 provision.go:83] configureAuth start
	I1031 17:22:24.352370  268320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-171018
	I1031 17:22:24.379041  268320 provision.go:138] copyHostCerts
	I1031 17:22:24.379101  268320 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem, removing ...
	I1031 17:22:24.379112  268320 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem
	I1031 17:22:24.379181  268320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem (1078 bytes)
	I1031 17:22:24.379252  268320 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem, removing ...
	I1031 17:22:24.379262  268320 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem
	I1031 17:22:24.379288  268320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem (1123 bytes)
	I1031 17:22:24.379374  268320 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem, removing ...
	I1031 17:22:24.379386  268320 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem
	I1031 17:22:24.379410  268320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem (1679 bytes)
	I1031 17:22:24.379452  268320 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem org=jenkins.calico-171018 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube calico-171018]
	I1031 17:22:24.455059  268320 provision.go:172] copyRemoteCerts
	I1031 17:22:24.455130  268320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:22:24.455178  268320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171018
	I1031 17:22:24.484971  268320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/calico-171018/id_rsa Username:docker}
	I1031 17:22:24.577225  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 17:22:24.598101  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1031 17:22:24.661809  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 17:22:24.707181  268320 provision.go:86] duration metric: configureAuth took 354.855317ms
	I1031 17:22:24.707219  268320 ubuntu.go:193] setting minikube options for container-runtime
	I1031 17:22:24.707434  268320 config.go:180] Loaded profile config "calico-171018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 17:22:24.707454  268320 machine.go:91] provisioned docker machine in 678.047389ms
	I1031 17:22:24.707465  268320 client.go:171] LocalClient.Create took 6.865429227s
	I1031 17:22:24.707489  268320 start.go:167] duration metric: libmachine.API.Create for "calico-171018" took 6.865484474s
	I1031 17:22:24.707500  268320 start.go:300] post-start starting for "calico-171018" (driver="docker")
	I1031 17:22:24.707509  268320 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:22:24.707567  268320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:22:24.707628  268320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171018
	I1031 17:22:24.742134  268320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/calico-171018/id_rsa Username:docker}
	I1031 17:22:24.838226  268320 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:22:24.842228  268320 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1031 17:22:24.842262  268320 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1031 17:22:24.842280  268320 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1031 17:22:24.842290  268320 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1031 17:22:24.842302  268320 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/addons for local assets ...
	I1031 17:22:24.842363  268320 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/files for local assets ...
	I1031 17:22:24.842490  268320 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem -> 100972.pem in /etc/ssl/certs
	I1031 17:22:24.842600  268320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:22:24.850947  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /etc/ssl/certs/100972.pem (1708 bytes)
	I1031 17:22:24.871646  268320 start.go:303] post-start completed in 164.131404ms
	I1031 17:22:24.871994  268320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-171018
	I1031 17:22:24.901287  268320 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/config.json ...
	I1031 17:22:24.901581  268320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 17:22:24.901631  268320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171018
	I1031 17:22:24.929446  268320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/calico-171018/id_rsa Username:docker}
	I1031 17:22:25.017416  268320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1031 17:22:25.022452  268320 start.go:128] duration metric: createHost completed in 7.183395303s
	I1031 17:22:25.022478  268320 start.go:83] releasing machines lock for "calico-171018", held for 7.183552868s
	I1031 17:22:25.022573  268320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-171018
	I1031 17:22:25.051260  268320 ssh_runner.go:195] Run: systemctl --version
	I1031 17:22:25.051317  268320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171018
	I1031 17:22:25.051359  268320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:22:25.051437  268320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171018
	I1031 17:22:25.085681  268320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/calico-171018/id_rsa Username:docker}
	I1031 17:22:25.085968  268320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/calico-171018/id_rsa Username:docker}
	I1031 17:22:25.209399  268320 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:22:25.221775  268320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:22:25.233913  268320 docker.go:189] disabling docker service ...
	I1031 17:22:25.233966  268320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 17:22:25.254433  268320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 17:22:25.265059  268320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 17:22:25.379231  268320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 17:22:25.467875  268320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 17:22:25.478133  268320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:22:25.494678  268320 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I1031 17:22:25.505619  268320 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1031 17:22:25.515227  268320 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1031 17:22:25.524813  268320 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
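Each of the four `sed -e 's|^.*key = .*$|key = value|'` runs above rewrites a whole-line TOML assignment in `/etc/containerd/config.toml` (sandbox image, OOM score restriction, cgroup driver, CNI conf dir). A sketch of that substitution pattern in Python, with `set_toml_line` as a hypothetical helper and values taken from the log:

```python
import re

# Mirror the sed edits: replace the entire line that assigns `key`,
# regardless of indentation, with a fresh "key = value" assignment.
def set_toml_line(config, key, rendered_value):
    pattern = re.compile(r"^.*%s = .*$" % re.escape(key), re.M)
    return pattern.sub("%s = %s" % (key, rendered_value), config)

cfg = 'sandbox_image = "k8s.gcr.io/pause:3.6"\nSystemdCgroup = true\n'
cfg = set_toml_line(cfg, "sandbox_image", '"registry.k8s.io/pause:3.8"')
cfg = set_toml_line(cfg, "SystemdCgroup", "false")
print(cfg)
```

Note the same caveat as the real sed command: the pattern matches any line containing `key = `, so it relies on those keys appearing exactly once in containerd's generated config.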
	I1031 17:22:25.534631  268320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:22:25.542468  268320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:22:25.550061  268320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:22:25.634859  268320 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:22:25.724554  268320 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1031 17:22:25.724630  268320 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1031 17:22:25.729892  268320 start.go:472] Will wait 60s for crictl version
	I1031 17:22:25.729959  268320 ssh_runner.go:195] Run: sudo crictl version
	I1031 17:22:25.762780  268320 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1031 17:22:25.762846  268320 ssh_runner.go:195] Run: containerd --version
	I1031 17:22:25.792722  268320 ssh_runner.go:195] Run: containerd --version
	I1031 17:22:25.823708  268320 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	I1031 17:22:25.825362  268320 cli_runner.go:164] Run: docker network inspect calico-171018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1031 17:22:25.850242  268320 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1031 17:22:25.853757  268320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:22:25.863927  268320 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1031 17:22:25.863988  268320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 17:22:25.889807  268320 containerd.go:553] all images are preloaded for containerd runtime.
	I1031 17:22:25.889840  268320 containerd.go:467] Images already preloaded, skipping extraction
	I1031 17:22:25.889892  268320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 17:22:25.916016  268320 containerd.go:553] all images are preloaded for containerd runtime.
	I1031 17:22:25.916041  268320 cache_images.go:84] Images are preloaded, skipping loading
	I1031 17:22:25.916108  268320 ssh_runner.go:195] Run: sudo crictl info
	I1031 17:22:25.950664  268320 cni.go:95] Creating CNI manager for "calico"
	I1031 17:22:25.950699  268320 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:22:25.950715  268320 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-171018 NodeName:calico-171018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1031 17:22:25.950912  268320 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-171018"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:22:25.951049  268320 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-171018 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:calico-171018 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I1031 17:22:25.951120  268320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1031 17:22:25.959166  268320 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:22:25.959250  268320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:22:25.967182  268320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (506 bytes)
	I1031 17:22:25.981422  268320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:22:25.995595  268320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2042 bytes)
	I1031 17:22:26.010200  268320 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1031 17:22:26.013846  268320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
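The guarded update above uses a different `/etc/hosts` strategy than the hostname fixup earlier: `grep -v` drops any stale line for the name, then the fresh `ip<TAB>name` mapping is appended and the temp file copied back. A hedged Python sketch of that filter-then-append pattern (`pin_host` is a hypothetical name):

```python
# Drop any existing "<tab>name" entry, then append the desired mapping,
# as the { grep -v ...; echo ...; } > /tmp/h.$$ pipeline does.
def pin_host(hosts_text, ip, name):
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(ip + "\t" + name)
    return "\n".join(kept) + "\n"

print(pin_host("127.0.0.1 localhost\n", "192.168.94.2",
               "control-plane.minikube.internal"))
```

Because stale entries are removed before appending, repeated runs never accumulate duplicates, which matters when a profile is recreated on a different subnet.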
	I1031 17:22:26.024051  268320 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018 for IP: 192.168.94.2
	I1031 17:22:26.024207  268320 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key
	I1031 17:22:26.024248  268320 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key
	I1031 17:22:26.024313  268320 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/client.key
	I1031 17:22:26.024332  268320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/client.crt with IP's: []
	I1031 17:22:26.323408  268320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/client.crt ...
	I1031 17:22:26.323455  268320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/client.crt: {Name:mkd20931c849eae8911ffaa914f3a995a6477ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:22:26.323760  268320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/client.key ...
	I1031 17:22:26.323787  268320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/client.key: {Name:mk8383f4b4b71779be5cdb4f768e3adf9ee20de9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:22:26.323953  268320 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/apiserver.key.ad8e880a
	I1031 17:22:26.323979  268320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1031 17:22:26.654096  268320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/apiserver.crt.ad8e880a ...
	I1031 17:22:26.654133  268320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/apiserver.crt.ad8e880a: {Name:mk0b773cd129325f5f987529280fb17d36b68ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:22:26.654355  268320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/apiserver.key.ad8e880a ...
	I1031 17:22:26.654374  268320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/apiserver.key.ad8e880a: {Name:mk02921178b0335d49e7d0cb530b81a5de186599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:22:26.654485  268320 certs.go:320] copying /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/apiserver.crt
	I1031 17:22:26.654548  268320 certs.go:324] copying /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/apiserver.key
	I1031 17:22:26.654592  268320 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/proxy-client.key
	I1031 17:22:26.654605  268320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/proxy-client.crt with IP's: []
	I1031 17:22:26.966205  268320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/proxy-client.crt ...
	I1031 17:22:26.966235  268320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/proxy-client.crt: {Name:mk58ae12768185317fac8731d20a0fa8c10ebb3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:22:26.966416  268320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/proxy-client.key ...
	I1031 17:22:26.966430  268320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/proxy-client.key: {Name:mk2cd4b7757b8d7538bd5025408de56dba5a3964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:22:26.966587  268320 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem (1338 bytes)
	W1031 17:22:26.966627  268320 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097_empty.pem, impossibly tiny 0 bytes
	I1031 17:22:26.966640  268320 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:22:26.966664  268320 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem (1078 bytes)
	I1031 17:22:26.966685  268320 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:22:26.966717  268320 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem (1679 bytes)
	I1031 17:22:26.966780  268320 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem (1708 bytes)
	I1031 17:22:26.967375  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:22:26.987437  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 17:22:27.006933  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:22:27.027904  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/calico-171018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 17:22:27.050668  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:22:27.069852  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:22:27.089703  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:22:27.110631  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:22:27.132636  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:22:27.157698  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem --> /usr/share/ca-certificates/10097.pem (1338 bytes)
	I1031 17:22:27.181996  268320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /usr/share/ca-certificates/100972.pem (1708 bytes)
	I1031 17:22:27.204576  268320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1031 17:22:27.218422  268320 ssh_runner.go:195] Run: openssl version
	I1031 17:22:27.223571  268320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:22:27.233386  268320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:22:27.236880  268320 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 31 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:22:27.236949  268320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:22:27.242564  268320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:22:27.252621  268320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10097.pem && ln -fs /usr/share/ca-certificates/10097.pem /etc/ssl/certs/10097.pem"
	I1031 17:22:27.269486  268320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10097.pem
	I1031 17:22:27.274712  268320 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 31 16:41 /usr/share/ca-certificates/10097.pem
	I1031 17:22:27.274779  268320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10097.pem
	I1031 17:22:27.281992  268320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10097.pem /etc/ssl/certs/51391683.0"
	I1031 17:22:27.290850  268320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100972.pem && ln -fs /usr/share/ca-certificates/100972.pem /etc/ssl/certs/100972.pem"
	I1031 17:22:27.299114  268320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100972.pem
	I1031 17:22:27.302472  268320 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 31 16:41 /usr/share/ca-certificates/100972.pem
	I1031 17:22:27.302537  268320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100972.pem
	I1031 17:22:27.307695  268320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100972.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:22:27.315707  268320 kubeadm.go:396] StartCluster: {Name:calico-171018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-171018 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:22:27.315828  268320 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1031 17:22:27.315879  268320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 17:22:27.342131  268320 cri.go:87] found id: ""
	I1031 17:22:27.342212  268320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:22:27.354362  268320 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:22:27.364656  268320 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1031 17:22:27.364722  268320 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:22:27.377810  268320 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:22:27.377876  268320 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1031 17:22:27.428647  268320 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1031 17:22:27.428725  268320 kubeadm.go:317] [preflight] Running pre-flight checks
	I1031 17:22:27.461359  268320 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1031 17:22:27.461445  268320 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1031 17:22:27.461505  268320 kubeadm.go:317] OS: Linux
	I1031 17:22:27.461569  268320 kubeadm.go:317] CGROUPS_CPU: enabled
	I1031 17:22:27.461632  268320 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1031 17:22:27.461686  268320 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1031 17:22:27.461765  268320 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1031 17:22:27.461830  268320 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1031 17:22:27.461902  268320 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1031 17:22:27.462051  268320 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1031 17:22:27.462130  268320 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1031 17:22:27.462192  268320 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1031 17:22:27.550264  268320 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:22:27.550402  268320 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:22:27.550511  268320 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:22:27.709651  268320 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:22:27.712135  268320 out.go:204]   - Generating certificates and keys ...
	I1031 17:22:27.712305  268320 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1031 17:22:27.712453  268320 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1031 17:22:27.820632  268320 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:22:27.903936  268320 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:22:28.093316  268320 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1031 17:22:28.273692  268320 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1031 17:22:28.375072  268320 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1031 17:22:28.375276  268320 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-171018 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1031 17:22:28.489246  268320 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1031 17:22:28.489459  268320 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-171018 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1031 17:22:28.689419  268320 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:22:28.886781  268320 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:22:29.290403  268320 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1031 17:22:29.290526  268320 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:22:29.788592  268320 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:22:29.912720  268320 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:22:30.084982  268320 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:22:30.263136  268320 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:22:30.275751  268320 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:22:30.279761  268320 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:22:30.279851  268320 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1031 17:22:30.379272  268320 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:22:30.515502  268320 out.go:204]   - Booting up control plane ...
	I1031 17:22:30.515732  268320 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:22:30.515843  268320 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:22:30.515926  268320 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:22:30.516045  268320 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:22:30.516270  268320 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:22:37.391699  268320 kubeadm.go:317] [apiclient] All control plane components are healthy after 7.003208 seconds
	I1031 17:22:37.391867  268320 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:22:37.405132  268320 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:22:37.922350  268320 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:22:37.922519  268320 kubeadm.go:317] [mark-control-plane] Marking the node calico-171018 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:22:38.430946  268320 kubeadm.go:317] [bootstrap-token] Using token: zue9ij.6w5ueehgbk510tcj
	I1031 17:22:38.432754  268320 out.go:204]   - Configuring RBAC rules ...
	I1031 17:22:38.432937  268320 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:22:38.435742  268320 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:22:38.440434  268320 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:22:38.442339  268320 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:22:38.444341  268320 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:22:38.446352  268320 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:22:38.455198  268320 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:22:38.665577  268320 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1031 17:22:38.851024  268320 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1031 17:22:38.853504  268320 kubeadm.go:317] 
	I1031 17:22:38.853599  268320 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1031 17:22:38.853613  268320 kubeadm.go:317] 
	I1031 17:22:38.853706  268320 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1031 17:22:38.853717  268320 kubeadm.go:317] 
	I1031 17:22:38.853749  268320 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1031 17:22:38.853820  268320 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:22:38.853882  268320 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:22:38.853889  268320 kubeadm.go:317] 
	I1031 17:22:38.853954  268320 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1031 17:22:38.853961  268320 kubeadm.go:317] 
	I1031 17:22:38.854018  268320 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:22:38.854024  268320 kubeadm.go:317] 
	I1031 17:22:38.854087  268320 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1031 17:22:38.854179  268320 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:22:38.854262  268320 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:22:38.854310  268320 kubeadm.go:317] 
	I1031 17:22:38.854419  268320 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:22:38.854512  268320 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1031 17:22:38.854519  268320 kubeadm.go:317] 
	I1031 17:22:38.854619  268320 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token zue9ij.6w5ueehgbk510tcj \
	I1031 17:22:38.854758  268320 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:7d0aadaa8a3870a51ce9e26f0eed9d44b7f2ed877e3d3686c1873abceaa77688 \
	I1031 17:22:38.854784  268320 kubeadm.go:317] 	--control-plane 
	I1031 17:22:38.854790  268320 kubeadm.go:317] 
	I1031 17:22:38.854887  268320 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:22:38.854894  268320 kubeadm.go:317] 
	I1031 17:22:38.855001  268320 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token zue9ij.6w5ueehgbk510tcj \
	I1031 17:22:38.855127  268320 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:7d0aadaa8a3870a51ce9e26f0eed9d44b7f2ed877e3d3686c1873abceaa77688 
	I1031 17:22:38.857387  268320 kubeadm.go:317] W1031 17:22:27.418725     733 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1031 17:22:38.857709  268320 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1031 17:22:38.857893  268320 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:22:38.857927  268320 cni.go:95] Creating CNI manager for "calico"
	I1031 17:22:38.860005  268320 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1031 17:22:38.861977  268320 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1031 17:22:38.862003  268320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
	I1031 17:22:38.885588  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1031 17:22:40.793670  268320 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.908025071s)
	I1031 17:22:40.793721  268320 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 17:22:40.793816  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.27.1 minikube.k8s.io/commit=2e5adf9ee40d3190a65d3fa843a253d73ae4fdf3 minikube.k8s.io/name=calico-171018 minikube.k8s.io/updated_at=2022_10_31T17_22_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:40.793817  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:40.801880  268320 ops.go:34] apiserver oom_adj: -16
	I1031 17:22:40.890716  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:41.490161  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:41.990186  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:42.489570  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:42.990449  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:43.490142  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:43.989666  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:44.490067  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:44.990055  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:45.490116  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:45.989765  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:46.489679  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:46.989777  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:47.490215  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:47.990141  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:48.490117  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:48.989624  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:49.490157  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:49.990177  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:50.490432  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:50.989687  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:51.490365  268320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:22:51.624885  268320 kubeadm.go:1067] duration metric: took 10.831129458s to wait for elevateKubeSystemPrivileges.
	I1031 17:22:51.624937  268320 kubeadm.go:398] StartCluster complete in 24.30923883s
	I1031 17:22:51.624958  268320 settings.go:142] acquiring lock: {Name:mk815a86086a5a2f83362177da735ab9253065a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:22:51.625079  268320 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 17:22:51.626672  268320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3650/kubeconfig: {Name:mkbe3dcb9ce3e3942a7be44b5e867e137f1872a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:22:52.143861  268320 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-171018" rescaled to 1
	I1031 17:22:52.143915  268320 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1031 17:22:52.147529  268320 out.go:177] * Verifying Kubernetes components...
	I1031 17:22:52.144034  268320 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 17:22:52.144035  268320 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I1031 17:22:52.144273  268320 config.go:180] Loaded profile config "calico-171018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 17:22:52.149423  268320 addons.go:65] Setting storage-provisioner=true in profile "calico-171018"
	I1031 17:22:52.149444  268320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:22:52.149456  268320 addons.go:65] Setting default-storageclass=true in profile "calico-171018"
	I1031 17:22:52.149472  268320 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-171018"
	I1031 17:22:52.149446  268320 addons.go:153] Setting addon storage-provisioner=true in "calico-171018"
	W1031 17:22:52.149503  268320 addons.go:162] addon storage-provisioner should already be in state true
	I1031 17:22:52.149561  268320 host.go:66] Checking if "calico-171018" exists ...
	I1031 17:22:52.149832  268320 cli_runner.go:164] Run: docker container inspect calico-171018 --format={{.State.Status}}
	I1031 17:22:52.149970  268320 cli_runner.go:164] Run: docker container inspect calico-171018 --format={{.State.Status}}
	I1031 17:22:52.208516  268320 addons.go:153] Setting addon default-storageclass=true in "calico-171018"
	W1031 17:22:52.208544  268320 addons.go:162] addon default-storageclass should already be in state true
	I1031 17:22:52.208567  268320 host.go:66] Checking if "calico-171018" exists ...
	I1031 17:22:52.208935  268320 cli_runner.go:164] Run: docker container inspect calico-171018 --format={{.State.Status}}
	I1031 17:22:52.213283  268320 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:22:52.214908  268320 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:22:52.214931  268320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 17:22:52.214991  268320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171018
	I1031 17:22:52.245064  268320 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 17:22:52.245090  268320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 17:22:52.245141  268320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171018
	I1031 17:22:52.254452  268320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/calico-171018/id_rsa Username:docker}
	I1031 17:22:52.278075  268320 node_ready.go:35] waiting up to 5m0s for node "calico-171018" to be "Ready" ...
	I1031 17:22:52.278403  268320 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 17:22:52.281922  268320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/calico-171018/id_rsa Username:docker}
	I1031 17:22:52.284913  268320 node_ready.go:49] node "calico-171018" has status "Ready":"True"
	I1031 17:22:52.284933  268320 node_ready.go:38] duration metric: took 6.828605ms waiting for node "calico-171018" to be "Ready" ...
	I1031 17:22:52.284944  268320 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:22:52.300415  268320 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace to be "Ready" ...
	I1031 17:22:52.456315  268320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:22:52.469440  268320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 17:22:54.053664  268320 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.775195047s)
	I1031 17:22:54.053770  268320 start.go:826] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I1031 17:22:54.148666  268320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.679183816s)
	I1031 17:22:54.148737  268320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.692249147s)
	I1031 17:22:54.151256  268320 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1031 17:22:54.152821  268320 addons.go:414] enableAddons completed in 2.008791s
	I1031 17:22:54.319712  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:22:56.319937  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:22:58.818160  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:01.318968  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:03.818679  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:05.851180  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:08.318927  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:10.818769  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:13.318109  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:15.319474  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:17.818248  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:19.819044  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:22.319611  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:24.818602  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:26.818755  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:28.819322  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:30.819432  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:33.317881  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:35.318349  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:37.818775  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:39.819106  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:41.819542  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:44.319429  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:46.319537  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:48.819031  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:51.319174  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:53.818582  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:55.818712  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:23:57.819138  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:00.318909  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:02.319924  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:04.818753  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:07.319841  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:09.818895  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:11.819107  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:13.819647  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:16.322282  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:18.819085  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:21.318332  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:23.319954  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:25.818107  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:27.819455  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:30.318771  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:32.350848  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:34.819178  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:37.318581  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:39.319035  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:41.319440  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:43.818418  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:46.318489  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:48.319678  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:50.818947  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:52.819056  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:55.319291  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:24:57.819573  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:00.318470  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:02.320185  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:04.818371  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:07.318404  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:09.319044  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:11.818340  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:13.818933  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:15.819708  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:18.318700  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:20.319569  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:22.348521  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:24.818825  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:27.318890  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:29.818264  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:31.818611  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:34.318521  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:36.819020  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:39.319094  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:41.818659  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:44.319521  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:46.819046  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:49.318352  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:51.319319  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:53.818382  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:55.818807  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:57.818916  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:25:59.819209  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:02.319499  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:04.320264  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:06.818380  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:08.818698  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:11.318812  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:13.818490  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:15.819109  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:18.318463  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:20.318688  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:22.319494  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:24.820014  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:27.318527  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:29.319456  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:31.321148  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:33.819260  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:36.319180  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:38.818809  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:41.319078  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:43.818653  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:46.318150  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:48.318360  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:50.818874  268320 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:52.351849  268320 pod_ready.go:81] duration metric: took 4m0.051397191s waiting for pod "calico-kube-controllers-7df895d496-vqgcm" in "kube-system" namespace to be "Ready" ...
	E1031 17:26:52.351872  268320 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1031 17:26:52.351886  268320 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-lbwbw" in "kube-system" namespace to be "Ready" ...
	I1031 17:26:54.363608  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:56.364584  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:26:58.865954  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:01.364800  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:03.864128  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:05.864824  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:08.364496  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:10.364992  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:12.366091  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:14.864402  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:16.864544  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:19.364356  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:21.365872  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:23.865451  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:26.364952  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:28.863778  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:30.864253  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:32.865417  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:35.364275  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:37.865298  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:40.365053  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:42.863517  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:44.864364  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:46.864655  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:49.364686  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:51.864560  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:54.364775  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:56.863858  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:27:58.864896  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:01.363766  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:03.364891  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:05.365033  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:07.865012  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:10.364162  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:12.364974  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:14.865108  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:16.865388  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:19.363754  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:21.364788  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:23.864554  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:25.864972  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:28.364694  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:30.865024  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:33.364470  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:35.864357  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:37.864633  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:39.864684  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:42.363893  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:44.366541  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:46.864144  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:48.864577  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:51.364371  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:53.864702  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:55.865488  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:28:58.364481  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:00.864213  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:03.364029  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:05.364361  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:07.365313  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:09.864255  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:12.364199  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:14.364440  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:16.864492  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:19.364669  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:21.864243  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:23.864501  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:26.364615  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:28.864489  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:30.864821  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:33.363378  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:35.364363  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:37.364397  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:39.364483  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:41.863992  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:43.864335  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:45.864544  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:48.364579  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:50.863705  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:52.864461  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:54.864880  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:57.364202  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:29:59.864102  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:01.869183  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:04.364062  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:06.368122  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:08.863805  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:10.867134  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:13.363895  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:15.864119  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:18.364499  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:20.865222  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:23.364213  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:25.864204  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:27.864798  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:29.866410  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:32.364213  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:34.867419  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:37.364115  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:39.364309  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:41.864533  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:43.864815  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:46.364270  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:48.364389  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:50.863632  268320 pod_ready.go:102] pod "calico-node-lbwbw" in "kube-system" namespace has status "Ready":"False"
	I1031 17:30:52.369504  268320 pod_ready.go:81] duration metric: took 4m0.017606466s waiting for pod "calico-node-lbwbw" in "kube-system" namespace to be "Ready" ...
	E1031 17:30:52.369533  268320 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1031 17:30:52.369545  268320 pod_ready.go:38] duration metric: took 8m0.084591907s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:30:52.371968  268320 out.go:177] 
	W1031 17:30:52.373544  268320 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W1031 17:30:52.373563  268320 out.go:239] * 
	W1031 17:30:52.374673  268320 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 17:30:52.376696  268320 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (514.94s)

TestNetworkPlugins/group/enable-default-cni/DNS (330.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default
E1031 17:24:37.301052   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136782102s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default
E1031 17:24:55.860210   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149203596s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129936982s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132773278s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default
E1031 17:25:53.549851   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125293116s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E1031 17:25:59.221789   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133233189s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134655723s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default
E1031 17:27:03.605701   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:03.610984   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:03.621249   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:03.641589   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:03.681874   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:03.762228   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:03.922338   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:04.242986   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:04.536816   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:27:04.542069   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:27:04.552367   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:27:04.572638   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:27:04.612913   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:27:04.693259   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:27:04.853927   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:27:04.883142   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:05.174576   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:27:05.815693   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:27:06.163398   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:07.096443   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:27:08.724137   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:09.656592   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:27:12.017023   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137295277s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E1031 17:27:13.845272   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:14.777507   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default
E1031 17:27:24.085998   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136750278s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E1031 17:27:39.701058   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
E1031 17:27:44.566657   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:27:45.498370   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135442025s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136401037s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default
E1031 17:29:48.379899   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129100419s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (330.78s)

TestNetworkPlugins/group/bridge/DNS (337.94s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137267654s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134109625s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default
E1031 17:28:15.378771   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.145205801s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default
E1031 17:28:23.051166   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
E1031 17:28:23.056455   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
E1031 17:28:23.066723   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
E1031 17:28:23.087011   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
E1031 17:28:23.127289   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
E1031 17:28:23.207648   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
E1031 17:28:23.368031   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
E1031 17:28:23.688655   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
E1031 17:28:24.329384   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
E1031 17:28:25.527103   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:28:25.610282   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
E1031 17:28:26.459216   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:28:28.176112   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
E1031 17:28:33.296880   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.139719481s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default
E1031 17:28:41.813299   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 17:28:43.062526   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
E1031 17:28:43.537183   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140505604s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127566193s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140484327s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E1031 17:29:44.978094   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default
E1031 17:29:47.447408   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12874635s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131139203s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default
E1031 17:31:06.898650   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126476672s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127660297s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1031 17:32:03.604954   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:32:04.536139   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
E1031 17:32:12.017380   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
E1031 17:32:31.287961   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/kindnet-171017/client.crt: no such file or directory
E1031 17:32:32.220629   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171016 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130126705s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (337.94s)

Test pass (249/277)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 13.32
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.11
10 TestDownloadOnly/v1.25.3/json-events 6.46
11 TestDownloadOnly/v1.25.3/preload-exists 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.11
16 TestDownloadOnly/DeleteAll 0.32
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.2
18 TestDownloadOnlyKic 3.93
19 TestBinaryMirror 0.96
20 TestOffline 74.85
22 TestAddons/Setup 171.15
24 TestAddons/parallel/Registry 15.72
25 TestAddons/parallel/Ingress 22.33
26 TestAddons/parallel/MetricsServer 5.74
27 TestAddons/parallel/HelmTiller 15.8
29 TestAddons/parallel/CSI 47.01
30 TestAddons/parallel/Headlamp 10.2
31 TestAddons/parallel/CloudSpanner 5.41
33 TestAddons/serial/GCPAuth 42.95
34 TestAddons/StoppedEnableDisable 20.41
35 TestCertOptions 34.85
36 TestCertExpiration 236.01
38 TestForceSystemdFlag 47.29
39 TestForceSystemdEnv 29.79
40 TestKVMDriverInstallOrUpdate 5.26
44 TestErrorSpam/setup 22.54
45 TestErrorSpam/start 0.94
46 TestErrorSpam/status 1.08
47 TestErrorSpam/pause 1.61
48 TestErrorSpam/unpause 1.58
49 TestErrorSpam/stop 1.49
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 44.73
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 15.69
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.08
60 TestFunctional/serial/CacheCmd/cache/add_remote 4.37
61 TestFunctional/serial/CacheCmd/cache/add_local 2.16
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
63 TestFunctional/serial/CacheCmd/cache/list 0.07
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.22
66 TestFunctional/serial/CacheCmd/cache/delete 0.14
67 TestFunctional/serial/MinikubeKubectlCmd 0.13
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
69 TestFunctional/serial/ExtraConfig 37.05
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1.12
72 TestFunctional/serial/LogsFileCmd 1.16
74 TestFunctional/parallel/ConfigCmd 0.6
75 TestFunctional/parallel/DashboardCmd 17.43
76 TestFunctional/parallel/DryRun 0.73
77 TestFunctional/parallel/InternationalLanguage 0.29
78 TestFunctional/parallel/StatusCmd 1.4
81 TestFunctional/parallel/ServiceCmd 9.94
82 TestFunctional/parallel/ServiceCmdConnect 15.72
83 TestFunctional/parallel/AddonsCmd 0.24
84 TestFunctional/parallel/PersistentVolumeClaim 42.88
86 TestFunctional/parallel/SSHCmd 0.85
87 TestFunctional/parallel/CpCmd 1.7
88 TestFunctional/parallel/MySQL 31.34
89 TestFunctional/parallel/FileSync 0.41
90 TestFunctional/parallel/CertSync 2.38
94 TestFunctional/parallel/NodeLabels 0.09
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.82
98 TestFunctional/parallel/License 0.28
99 TestFunctional/parallel/ProfileCmd/profile_not_create 0.59
100 TestFunctional/parallel/MountCmd/any-port 11.68
101 TestFunctional/parallel/ProfileCmd/profile_list 0.54
102 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
104 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
106 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.26
107 TestFunctional/parallel/MountCmd/specific-port 2.5
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
109 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
113 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 0.65
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.6
121 TestFunctional/parallel/ImageCommands/Setup 1.55
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.53
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.6
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.76
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.57
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.62
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.95
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.6
132 TestFunctional/delete_addon-resizer_images 0.09
133 TestFunctional/delete_my-image_image 0.02
134 TestFunctional/delete_minikube_cached_images 0.02
137 TestIngressAddonLegacy/StartLegacyK8sCluster 66.41
139 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.28
140 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.38
141 TestIngressAddonLegacy/serial/ValidateIngressAddons 43.54
144 TestJSONOutput/start/Command 44.06
145 TestJSONOutput/start/Audit 0
147 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/pause/Command 0.68
151 TestJSONOutput/pause/Audit 0
153 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/unpause/Command 0.62
157 TestJSONOutput/unpause/Audit 0
159 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/stop/Command 5.8
163 TestJSONOutput/stop/Audit 0
165 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
167 TestErrorJSONOutput 0.28
169 TestKicCustomNetwork/create_custom_network 32.11
170 TestKicCustomNetwork/use_default_bridge_network 28.03
171 TestKicExistingNetwork 30.01
172 TestKicCustomSubnet 28.32
173 TestMainNoArgs 0.07
174 TestMinikubeProfile 61.94
177 TestMountStart/serial/StartWithMountFirst 5
178 TestMountStart/serial/VerifyMountFirst 0.32
179 TestMountStart/serial/StartWithMountSecond 4.93
180 TestMountStart/serial/VerifyMountSecond 0.33
181 TestMountStart/serial/DeleteFirst 1.73
182 TestMountStart/serial/VerifyMountPostDelete 0.33
183 TestMountStart/serial/Stop 1.25
184 TestMountStart/serial/RestartStopped 6.53
185 TestMountStart/serial/VerifyMountPostStop 0.32
188 TestMultiNode/serial/FreshStart2Nodes 89.62
189 TestMultiNode/serial/DeployApp2Nodes 4.49
190 TestMultiNode/serial/PingHostFrom2Pods 0.9
191 TestMultiNode/serial/AddNode 42.55
192 TestMultiNode/serial/ProfileList 0.37
193 TestMultiNode/serial/CopyFile 11.63
194 TestMultiNode/serial/StopNode 2.37
195 TestMultiNode/serial/StartAfterStop 31.12
196 TestMultiNode/serial/RestartKeepsNodes 171.94
197 TestMultiNode/serial/DeleteNode 4.94
198 TestMultiNode/serial/StopMultiNode 40.07
199 TestMultiNode/serial/RestartMultiNode 101.48
200 TestMultiNode/serial/ValidateNameConflict 25.35
207 TestScheduledStopUnix 99.27
210 TestInsufficientStorage 15.41
211 TestRunningBinaryUpgrade 93.52
214 TestMissingContainerUpgrade 168.29
216 TestStoppedBinaryUpgrade/Setup 0.58
217 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
218 TestNoKubernetes/serial/StartWithK8s 40.83
219 TestStoppedBinaryUpgrade/Upgrade 118.09
220 TestNoKubernetes/serial/StartWithStopK8s 18.88
221 TestNoKubernetes/serial/Start 7.41
222 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
223 TestNoKubernetes/serial/ProfileList 3.33
224 TestNoKubernetes/serial/Stop 1.3
225 TestNoKubernetes/serial/StartNoArgs 6.28
226 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
235 TestPause/serial/Start 59.48
236 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
237 TestPause/serial/SecondStartNoReconfiguration 16.18
245 TestNetworkPlugins/group/false 0.62
249 TestPause/serial/Pause 0.88
250 TestPause/serial/VerifyStatus 0.49
251 TestPause/serial/Unpause 0.91
252 TestPause/serial/PauseAgain 1.1
253 TestPause/serial/DeletePaused 5.4
254 TestPause/serial/VerifyDeletedResources 0.5
256 TestStartStop/group/old-k8s-version/serial/FirstStart 127.28
258 TestStartStop/group/no-preload/serial/FirstStart 51.9
259 TestStartStop/group/no-preload/serial/DeployApp 8.33
260 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
261 TestStartStop/group/no-preload/serial/Stop 20.06
262 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
263 TestStartStop/group/no-preload/serial/SecondStart 315.46
264 TestStartStop/group/old-k8s-version/serial/DeployApp 8.37
265 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.64
266 TestStartStop/group/old-k8s-version/serial/Stop 20.08
267 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
268 TestStartStop/group/old-k8s-version/serial/SecondStart 434.63
270 TestStartStop/group/embed-certs/serial/FirstStart 55.27
271 TestStartStop/group/embed-certs/serial/DeployApp 9.35
272 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.67
273 TestStartStop/group/embed-certs/serial/Stop 20.1
274 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
275 TestStartStop/group/embed-certs/serial/SecondStart 309.69
276 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
277 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
278 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.4
279 TestStartStop/group/no-preload/serial/Pause 3.18
281 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.09
282 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.36
283 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.78
284 TestStartStop/group/default-k8s-diff-port/serial/Stop 20.11
285 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
286 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 559.96
288 TestStartStop/group/newest-cni/serial/FirstStart 49.07
289 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
290 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
291 TestStartStop/group/newest-cni/serial/DeployApp 0
292 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.99
293 TestStartStop/group/newest-cni/serial/Stop 1.37
294 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
295 TestStartStop/group/newest-cni/serial/SecondStart 31.59
296 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
297 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
298 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.37
299 TestStartStop/group/old-k8s-version/serial/Pause 3.33
300 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.39
301 TestStartStop/group/embed-certs/serial/Pause 3.42
302 TestNetworkPlugins/group/auto/Start 48.45
303 TestNetworkPlugins/group/kindnet/Start 47
304 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
305 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
306 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.44
307 TestStartStop/group/newest-cni/serial/Pause 3.37
308 TestNetworkPlugins/group/cilium/Start 101.26
309 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
310 TestNetworkPlugins/group/auto/KubeletFlags 0.38
311 TestNetworkPlugins/group/auto/NetCatPod 10.27
312 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
313 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
314 TestNetworkPlugins/group/auto/DNS 0.13
315 TestNetworkPlugins/group/auto/Localhost 0.15
316 TestNetworkPlugins/group/auto/HairPin 0.14
318 TestNetworkPlugins/group/kindnet/DNS 0.16
319 TestNetworkPlugins/group/kindnet/Localhost 0.13
320 TestNetworkPlugins/group/kindnet/HairPin 0.14
321 TestNetworkPlugins/group/bridge/Start 300.91
322 TestNetworkPlugins/group/cilium/ControllerPod 5.02
323 TestNetworkPlugins/group/cilium/KubeletFlags 0.36
324 TestNetworkPlugins/group/cilium/NetCatPod 10.88
325 TestNetworkPlugins/group/cilium/DNS 0.14
326 TestNetworkPlugins/group/cilium/Localhost 0.13
327 TestNetworkPlugins/group/cilium/HairPin 0.13
328 TestNetworkPlugins/group/enable-default-cni/Start 38.99
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.28
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
333 TestNetworkPlugins/group/bridge/NetCatPod 9.21
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.38
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.15
TestDownloadOnly/v1.16.0/json-events (13.32s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-163557 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-163557 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.319317506s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (13.32s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.11s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-163557
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-163557: exit status 85 (104.703365ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-163557 | jenkins | v1.27.1 | 31 Oct 22 16:35 UTC |          |
	|         | -p download-only-163557        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/31 16:35:57
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 16:35:57.358963   10111 out.go:296] Setting OutFile to fd 1 ...
	I1031 16:35:57.359090   10111 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 16:35:57.359095   10111 out.go:309] Setting ErrFile to fd 2...
	I1031 16:35:57.359099   10111 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 16:35:57.359215   10111 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
	W1031 16:35:57.359344   10111 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15232-3650/.minikube/config/config.json: open /home/jenkins/minikube-integration/15232-3650/.minikube/config/config.json: no such file or directory
	I1031 16:35:57.360052   10111 out.go:303] Setting JSON to true
	I1031 16:35:57.360968   10111 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1107,"bootTime":1667233050,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 16:35:57.361043   10111 start.go:126] virtualization: kvm guest
	I1031 16:35:57.365254   10111 out.go:97] [download-only-163557] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 16:35:57.365420   10111 notify.go:220] Checking for updates...
	I1031 16:35:57.368018   10111 out.go:169] MINIKUBE_LOCATION=15232
	W1031 16:35:57.365468   10111 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball: no such file or directory
	I1031 16:35:57.372936   10111 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 16:35:57.376238   10111 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 16:35:57.378890   10111 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	I1031 16:35:57.381312   10111 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1031 16:35:57.385638   10111 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1031 16:35:57.385891   10111 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 16:35:57.415071   10111 docker.go:137] docker version: linux-20.10.21
	I1031 16:35:57.415183   10111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 16:35:58.384220   10111 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-10-31 16:35:57.436435042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 16:35:58.384327   10111 docker.go:254] overlay module found
	I1031 16:35:58.387124   10111 out.go:97] Using the docker driver based on user configuration
	I1031 16:35:58.387153   10111 start.go:282] selected driver: docker
	I1031 16:35:58.387166   10111 start.go:808] validating driver "docker" against <nil>
	I1031 16:35:58.387254   10111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 16:35:58.503587   10111 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-10-31 16:35:58.407934584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 16:35:58.503691   10111 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1031 16:35:58.504231   10111 start_flags.go:384] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I1031 16:35:58.504346   10111 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
	I1031 16:35:58.507243   10111 out.go:169] Using Docker driver with root privileges
	I1031 16:35:58.509075   10111 cni.go:95] Creating CNI manager for ""
	I1031 16:35:58.509104   10111 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 16:35:58.509131   10111 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1031 16:35:58.509140   10111 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1031 16:35:58.509145   10111 start_flags.go:312] Found "CNI" CNI - setting NetworkPlugin=cni
	I1031 16:35:58.509171   10111 start_flags.go:317] config:
	{Name:download-only-163557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-163557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 16:35:58.511382   10111 out.go:97] Starting control plane node download-only-163557 in cluster download-only-163557
	I1031 16:35:58.511418   10111 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1031 16:35:58.513179   10111 out.go:97] Pulling base image ...
	I1031 16:35:58.513240   10111 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1031 16:35:58.513322   10111 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1031 16:35:58.536049   10111 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1031 16:35:58.536385   10111 image.go:60] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1031 16:35:58.536499   10111 image.go:120] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1031 16:35:58.611247   10111 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1031 16:35:58.611285   10111 cache.go:57] Caching tarball of preloaded images
	I1031 16:35:58.611472   10111 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1031 16:35:58.614593   10111 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1031 16:35:58.614637   10111 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1031 16:35:58.718486   10111 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1031 16:36:01.392252   10111 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1031 16:36:01.392363   10111 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1031 16:36:02.342078   10111 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I1031 16:36:02.342435   10111 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/download-only-163557/config.json ...
	I1031 16:36:02.342475   10111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/download-only-163557/config.json: {Name:mk3b8fde201a2011def08408c6a7154be1e2fd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 16:36:02.342692   10111 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1031 16:36:02.342946   10111 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/15232-3650/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-163557"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.11s)

TestDownloadOnly/v1.25.3/json-events (6.46s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-163557 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-163557 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.462236067s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (6.46s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (0.11s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-163557
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-163557: exit status 85 (105.638868ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-163557 | jenkins | v1.27.1 | 31 Oct 22 16:35 UTC |          |
	|         | -p download-only-163557        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-163557 | jenkins | v1.27.1 | 31 Oct 22 16:36 UTC |          |
	|         | -p download-only-163557        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/31 16:36:10
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 16:36:10.792846   10279 out.go:296] Setting OutFile to fd 1 ...
	I1031 16:36:10.793006   10279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 16:36:10.793017   10279 out.go:309] Setting ErrFile to fd 2...
	I1031 16:36:10.793025   10279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 16:36:10.793144   10279 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
	W1031 16:36:10.793285   10279 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15232-3650/.minikube/config/config.json: open /home/jenkins/minikube-integration/15232-3650/.minikube/config/config.json: no such file or directory
	I1031 16:36:10.793781   10279 out.go:303] Setting JSON to true
	I1031 16:36:10.794630   10279 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1121,"bootTime":1667233050,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 16:36:10.794710   10279 start.go:126] virtualization: kvm guest
	I1031 16:36:10.798188   10279 out.go:97] [download-only-163557] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 16:36:10.798381   10279 notify.go:220] Checking for updates...
	I1031 16:36:10.800868   10279 out.go:169] MINIKUBE_LOCATION=15232
	I1031 16:36:10.803320   10279 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 16:36:10.805776   10279 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 16:36:10.808154   10279 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	I1031 16:36:10.810430   10279 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1031 16:36:10.814336   10279 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1031 16:36:10.815831   10279 config.go:180] Loaded profile config "download-only-163557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1031 16:36:10.815920   10279 start.go:716] api.Load failed for download-only-163557: filestore "download-only-163557": Docker machine "download-only-163557" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1031 16:36:10.816088   10279 driver.go:365] Setting default libvirt URI to qemu:///system
	W1031 16:36:10.816216   10279 start.go:716] api.Load failed for download-only-163557: filestore "download-only-163557": Docker machine "download-only-163557" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1031 16:36:10.845432   10279 docker.go:137] docker version: linux-20.10.21
	I1031 16:36:10.845528   10279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 16:36:10.952363   10279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-10-31 16:36:10.865991939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 16:36:10.952480   10279 docker.go:254] overlay module found
	I1031 16:36:10.954980   10279 out.go:97] Using the docker driver based on existing profile
	I1031 16:36:10.955009   10279 start.go:282] selected driver: docker
	I1031 16:36:10.955024   10279 start.go:808] validating driver "docker" against &{Name:download-only-163557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-163557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 16:36:10.955195   10279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 16:36:11.063387   10279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-10-31 16:36:10.975446706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 16:36:11.064028   10279 cni.go:95] Creating CNI manager for ""
	I1031 16:36:11.064052   10279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1031 16:36:11.064098   10279 start_flags.go:317] config:
	{Name:download-only-163557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-163557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 16:36:11.066708   10279 out.go:97] Starting control plane node download-only-163557 in cluster download-only-163557
	I1031 16:36:11.066743   10279 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1031 16:36:11.068629   10279 out.go:97] Pulling base image ...
	I1031 16:36:11.068674   10279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1031 16:36:11.068797   10279 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1031 16:36:11.090858   10279 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1031 16:36:11.091129   10279 image.go:60] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1031 16:36:11.091159   10279 image.go:63] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory, skipping pull
	I1031 16:36:11.091163   10279 image.go:104] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in cache, skipping pull
	I1031 16:36:11.091183   10279 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 as a tarball
	I1031 16:36:11.169020   10279 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1031 16:36:11.169053   10279 cache.go:57] Caching tarball of preloaded images
	I1031 16:36:11.169243   10279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1031 16:36:11.172303   10279 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I1031 16:36:11.172340   10279 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	I1031 16:36:11.273434   10279 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:60f9fee056da17edf086af60afca6341 -> /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1031 16:36:15.044193   10279 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	I1031 16:36:15.044322   10279 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-163557"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.11s)

TestDownloadOnly/DeleteAll (0.32s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.32s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-163557
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnlyKic (3.93s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-163618 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-163618 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (2.323077043s)
helpers_test.go:175: Cleaning up "download-docker-163618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-163618
--- PASS: TestDownloadOnlyKic (3.93s)

TestBinaryMirror (0.96s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-163621 --alsologtostderr --binary-mirror http://127.0.0.1:32897 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-163621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-163621
--- PASS: TestBinaryMirror (0.96s)

TestOffline (74.85s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-170744 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-170744 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m12.431540389s)
helpers_test.go:175: Cleaning up "offline-containerd-170744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-170744
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-170744: (2.415088802s)
--- PASS: TestOffline (74.85s)

TestAddons/Setup (171.15s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-163622 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-163622 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m51.15212511s)
--- PASS: TestAddons/Setup (171.15s)

TestAddons/parallel/Registry (15.72s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: registry stabilized in 13.630868ms
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-jlt6b" [2a5d0606-0f4b-4f84-bf78-7dbf44117191] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00876808s
addons_test.go:288: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-mlb6d" [3bb00102-5543-4968-afc6-596aa38a4de7] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:288: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009469612s
addons_test.go:293: (dbg) Run:  kubectl --context addons-163622 delete po -l run=registry-test --now
addons_test.go:298: (dbg) Run:  kubectl --context addons-163622 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:298: (dbg) Done: kubectl --context addons-163622 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.803735008s)
addons_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p addons-163622 ip
2022/10/31 16:39:29 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p addons-163622 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.72s)

TestAddons/parallel/Ingress (22.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Run:  kubectl --context addons-163622 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Run:  kubectl --context addons-163622 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context addons-163622 replace --force -f testdata/nginx-pod-svc.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [d741bef4-eca5-4243-a300-e8cbe87c4940] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [d741bef4-eca5-4243-a300-e8cbe87c4940] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.008528736s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p addons-163622 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: (dbg) Run:  kubectl --context addons-163622 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p addons-163622 ip
addons_test.go:250: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p addons-163622 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p addons-163622 addons disable ingress-dns --alsologtostderr -v=1: (1.752960498s)
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-163622 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p addons-163622 addons disable ingress --alsologtostderr -v=1: (7.574506048s)
--- PASS: TestAddons/parallel/Ingress (22.33s)

TestAddons/parallel/MetricsServer (5.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: metrics-server stabilized in 10.739322ms
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-769cd898cd-wb9ns" [ae7c6f27-cf64-4efd-8d14-d1bc274f1968] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009674838s
addons_test.go:368: (dbg) Run:  kubectl --context addons-163622 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: (dbg) Run:  out/minikube-linux-amd64 -p addons-163622 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.74s)

TestAddons/parallel/HelmTiller (15.80s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: tiller-deploy stabilized in 2.506439ms
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-vbkc7" [b8c86be6-496a-4f25-a288-ad4234eadf09] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.05588216s
addons_test.go:426: (dbg) Run:  kubectl --context addons-163622 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:426: (dbg) Done: kubectl --context addons-163622 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (10.081869641s)
addons_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p addons-163622 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.80s)

TestAddons/parallel/CSI (47.01s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:514: csi-hostpath-driver pods stabilized in 6.170225ms
addons_test.go:517: (dbg) Run:  kubectl --context addons-163622 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:522: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-163622 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:527: (dbg) Run:  kubectl --context addons-163622 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:532: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [6ad61668-9ca6-4962-b9ae-c7578642294c] Pending
helpers_test.go:342: "task-pv-pod" [6ad61668-9ca6-4962-b9ae-c7578642294c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [6ad61668-9ca6-4962-b9ae-c7578642294c] Running
addons_test.go:532: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 24.007254806s
addons_test.go:537: (dbg) Run:  kubectl --context addons-163622 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:542: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-163622 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-163622 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:547: (dbg) Run:  kubectl --context addons-163622 delete pod task-pv-pod

=== CONT  TestAddons/parallel/CSI
addons_test.go:553: (dbg) Run:  kubectl --context addons-163622 delete pvc hpvc
addons_test.go:559: (dbg) Run:  kubectl --context addons-163622 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-163622 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-163622 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:574: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [b8ffcf5b-c1a3-4b7e-8c61-567023122696] Pending
helpers_test.go:342: "task-pv-pod-restore" [b8ffcf5b-c1a3-4b7e-8c61-567023122696] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [b8ffcf5b-c1a3-4b7e-8c61-567023122696] Running
addons_test.go:574: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.007116612s
addons_test.go:579: (dbg) Run:  kubectl --context addons-163622 delete pod task-pv-pod-restore
addons_test.go:583: (dbg) Run:  kubectl --context addons-163622 delete pvc hpvc-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-163622 delete volumesnapshot new-snapshot-demo
addons_test.go:591: (dbg) Run:  out/minikube-linux-amd64 -p addons-163622 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:591: (dbg) Done: out/minikube-linux-amd64 -p addons-163622 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.081246388s)
addons_test.go:595: (dbg) Run:  out/minikube-linux-amd64 -p addons-163622 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.01s)

TestAddons/parallel/Headlamp (10.20s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-163622 --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-163622 --alsologtostderr -v=1: (1.12640621s)
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-5f4cf474d8-k7ffn" [fe6f563d-a189-481d-9757-0365c466fa98] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-k7ffn" [fe6f563d-a189-481d-9757-0365c466fa98] Running

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.067458348s
--- PASS: TestAddons/parallel/Headlamp (10.20s)

TestAddons/parallel/CloudSpanner (5.41s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:342: "cloud-spanner-emulator-6c47ff8fb6-44m2h" [f66f9de5-c586-4220-ad06-185b25f6a872] Running

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007429986s
addons_test.go:762: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-163622
--- PASS: TestAddons/parallel/CloudSpanner (5.41s)

TestAddons/serial/GCPAuth (42.95s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:606: (dbg) Run:  kubectl --context addons-163622 create -f testdata/busybox.yaml
addons_test.go:613: (dbg) Run:  kubectl --context addons-163622 create sa gcp-auth-test
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5aa52b98-02dc-4ef7-b4af-75b969c9eefe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [5aa52b98-02dc-4ef7-b4af-75b969c9eefe] Running
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.007156462s
addons_test.go:625: (dbg) Run:  kubectl --context addons-163622 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:637: (dbg) Run:  kubectl --context addons-163622 describe sa gcp-auth-test
addons_test.go:675: (dbg) Run:  kubectl --context addons-163622 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:688: (dbg) Run:  out/minikube-linux-amd64 -p addons-163622 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:688: (dbg) Done: out/minikube-linux-amd64 -p addons-163622 addons disable gcp-auth --alsologtostderr -v=1: (6.262164249s)
addons_test.go:704: (dbg) Run:  out/minikube-linux-amd64 -p addons-163622 addons enable gcp-auth
addons_test.go:704: (dbg) Done: out/minikube-linux-amd64 -p addons-163622 addons enable gcp-auth: (2.174928949s)
addons_test.go:710: (dbg) Run:  kubectl --context addons-163622 apply -f testdata/private-image.yaml
addons_test.go:717: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-5c86c669bd-tbvff" [a0b3c77f-5afc-425e-9499-831d7ad0857f] Pending
helpers_test.go:342: "private-image-5c86c669bd-tbvff" [a0b3c77f-5afc-425e-9499-831d7ad0857f] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-5c86c669bd-tbvff" [a0b3c77f-5afc-425e-9499-831d7ad0857f] Running
addons_test.go:717: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 16.009608852s
addons_test.go:723: (dbg) Run:  kubectl --context addons-163622 apply -f testdata/private-image-eu.yaml
addons_test.go:728: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-64c96f687b-w92rq" [93be33bb-221e-488e-828e-f53ada029d71] Pending
helpers_test.go:342: "private-image-eu-64c96f687b-w92rq" [93be33bb-221e-488e-828e-f53ada029d71] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-64c96f687b-w92rq" [93be33bb-221e-488e-828e-f53ada029d71] Running
addons_test.go:728: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 8.009604131s
--- PASS: TestAddons/serial/GCPAuth (42.95s)

TestAddons/StoppedEnableDisable (20.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-163622
addons_test.go:135: (dbg) Done: out/minikube-linux-amd64 stop -p addons-163622: (20.173687759s)
addons_test.go:139: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-163622
addons_test.go:143: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-163622
--- PASS: TestAddons/StoppedEnableDisable (20.41s)

TestCertOptions (34.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-171033 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1031 17:10:53.549521   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-171033 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (31.920196026s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-171033 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-171033 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-171033 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-171033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-171033
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-171033: (2.104946872s)
--- PASS: TestCertOptions (34.85s)

TestCertExpiration (236.01s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-171023 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-171023 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.898219299s)

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-171023 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
E1031 17:14:14.061995   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-171023 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (14.774912964s)
helpers_test.go:175: Cleaning up "cert-expiration-171023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-171023
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-171023: (2.335647444s)
--- PASS: TestCertExpiration (236.01s)

TestForceSystemdFlag (47.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-171032 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-171032 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (44.824038603s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-171032 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-171032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-171032
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-171032: (2.088926227s)
--- PASS: TestForceSystemdFlag (47.29s)

TestForceSystemdEnv (29.79s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-170946 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-170946 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (26.697846843s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-170946 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-170946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-170946
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-170946: (2.623574068s)
--- PASS: TestForceSystemdEnv (29.79s)

TestKVMDriverInstallOrUpdate (5.26s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.26s)

TestErrorSpam/setup (22.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-164116 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-164116 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-164116 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-164116 --driver=docker  --container-runtime=containerd: (22.539770514s)
--- PASS: TestErrorSpam/setup (22.54s)

TestErrorSpam/start (0.94s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 start --dry-run
--- PASS: TestErrorSpam/start (0.94s)

TestErrorSpam/status (1.08s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 status
--- PASS: TestErrorSpam/status (1.08s)

TestErrorSpam/pause (1.61s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 pause
--- PASS: TestErrorSpam/pause (1.61s)

TestErrorSpam/unpause (1.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 stop: (1.251426477s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164116 --log_dir /tmp/nospam-164116 stop
--- PASS: TestErrorSpam/stop (1.49s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/test/nested/copy/10097/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (44.73s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-164150 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-164150 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (44.728463673s)
--- PASS: TestFunctional/serial/StartWithProxy (44.73s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (15.69s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-164150 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-164150 --alsologtostderr -v=8: (15.68634848s)
functional_test.go:656: soft start took 15.687054395s for "functional-164150" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.69s)
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)
TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-164150 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)
TestFunctional/serial/CacheCmd/cache/add_remote (4.37s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 cache add k8s.gcr.io/pause:3.1: (1.563074302s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 cache add k8s.gcr.io/pause:3.3: (1.576686724s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 cache add k8s.gcr.io/pause:latest: (1.234669104s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.37s)
TestFunctional/serial/CacheCmd/cache/add_local (2.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-164150 /tmp/TestFunctionalserialCacheCmdcacheadd_local826885776/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 cache add minikube-local-cache-test:functional-164150
functional_test.go:1082: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 cache add minikube-local-cache-test:functional-164150: (1.904155237s)
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 cache delete minikube-local-cache-test:functional-164150
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-164150
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.16s)
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)
TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)
TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164150 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (339.013173ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 cache reload: (1.18301073s)
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)
TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)
TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 kubectl -- --context functional-164150 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-164150 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
TestFunctional/serial/ExtraConfig (37.05s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-164150 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-164150 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.049886509s)
functional_test.go:754: restart took 37.050002033s for "functional-164150" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.05s)
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-164150 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
TestFunctional/serial/LogsCmd (1.12s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 logs: (1.124583594s)
--- PASS: TestFunctional/serial/LogsCmd (1.12s)
TestFunctional/serial/LogsFileCmd (1.16s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 logs --file /tmp/TestFunctionalserialLogsFileCmd3416506455/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 logs --file /tmp/TestFunctionalserialLogsFileCmd3416506455/001/logs.txt: (1.162256877s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.16s)
TestFunctional/parallel/ConfigCmd (0.6s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164150 config get cpus: exit status 14 (85.297479ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 config set cpus 2
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164150 config get cpus: exit status 14 (103.453905ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.60s)
TestFunctional/parallel/DashboardCmd (17.43s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-164150 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-164150 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 42492: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.43s)
TestFunctional/parallel/DryRun (0.73s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-164150 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-164150 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (300.568356ms)
-- stdout --
	* [functional-164150] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1031 16:43:41.055804   41247 out.go:296] Setting OutFile to fd 1 ...
	I1031 16:43:41.055969   41247 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 16:43:41.055983   41247 out.go:309] Setting ErrFile to fd 2...
	I1031 16:43:41.055990   41247 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 16:43:41.056176   41247 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
	I1031 16:43:41.056871   41247 out.go:303] Setting JSON to false
	I1031 16:43:41.058245   41247 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1571,"bootTime":1667233050,"procs":433,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 16:43:41.058339   41247 start.go:126] virtualization: kvm guest
	I1031 16:43:41.061113   41247 out.go:177] * [functional-164150] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 16:43:41.062770   41247 notify.go:220] Checking for updates...
	I1031 16:43:41.064162   41247 out.go:177]   - MINIKUBE_LOCATION=15232
	I1031 16:43:41.065807   41247 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 16:43:41.067317   41247 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 16:43:41.068754   41247 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	I1031 16:43:41.070153   41247 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 16:43:41.072240   41247 config.go:180] Loaded profile config "functional-164150": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 16:43:41.072816   41247 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 16:43:41.105828   41247 docker.go:137] docker version: linux-20.10.21
	I1031 16:43:41.105950   41247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 16:43:41.234819   41247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:41 SystemTime:2022-10-31 16:43:41.129443994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 16:43:41.234977   41247 docker.go:254] overlay module found
	I1031 16:43:41.238938   41247 out.go:177] * Using the docker driver based on existing profile
	I1031 16:43:41.240282   41247 start.go:282] selected driver: docker
	I1031 16:43:41.240312   41247 start.go:808] validating driver "docker" against &{Name:functional-164150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-164150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 16:43:41.240472   41247 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 16:43:41.243322   41247 out.go:177] 
	W1031 16:43:41.245140   41247 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1031 16:43:41.246746   41247 out.go:177] 
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-164150 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.73s)
TestFunctional/parallel/InternationalLanguage (0.29s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-164150 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-164150 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (291.267128ms)
-- stdout --
	* [functional-164150] minikube v1.27.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1031 16:43:40.742098   41022 out.go:296] Setting OutFile to fd 1 ...
	I1031 16:43:40.742269   41022 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 16:43:40.742282   41022 out.go:309] Setting ErrFile to fd 2...
	I1031 16:43:40.742289   41022 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 16:43:40.742518   41022 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
	I1031 16:43:40.743200   41022 out.go:303] Setting JSON to false
	I1031 16:43:40.744512   41022 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1571,"bootTime":1667233050,"procs":434,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 16:43:40.744601   41022 start.go:126] virtualization: kvm guest
	I1031 16:43:40.747524   41022 out.go:177] * [functional-164150] minikube v1.27.1 sur Ubuntu 20.04 (kvm/amd64)
	I1031 16:43:40.749381   41022 notify.go:220] Checking for updates...
	I1031 16:43:40.751210   41022 out.go:177]   - MINIKUBE_LOCATION=15232
	I1031 16:43:40.753068   41022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 16:43:40.754837   41022 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 16:43:40.756569   41022 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	I1031 16:43:40.758250   41022 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 16:43:40.762243   41022 config.go:180] Loaded profile config "functional-164150": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 16:43:40.762813   41022 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 16:43:40.794526   41022 docker.go:137] docker version: linux-20.10.21
	I1031 16:43:40.794629   41022 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 16:43:40.934781   41022 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-10-31 16:43:40.818801183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 16:43:40.934915   41022 docker.go:254] overlay module found
	I1031 16:43:40.938542   41022 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1031 16:43:40.939928   41022 start.go:282] selected driver: docker
	I1031 16:43:40.939958   41022 start.go:808] validating driver "docker" against &{Name:functional-164150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-164150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 16:43:40.940137   41022 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 16:43:40.942796   41022 out.go:177] 
	W1031 16:43:40.944374   41022 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1031 16:43:40.945864   41022 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd (9.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-164150 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-164150 expose deployment hello-node --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-b8456" [ce1e7c33-6a4c-4a08-97ca-9b0e2d6ddfc1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E1031 16:44:15.341100   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
helpers_test.go:342: "hello-node-5fcdfb5cc4-b8456" [ce1e7c33-6a4c-4a08-97ca-9b0e2d6ddfc1] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 6.006803655s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 service list
functional_test.go:1449: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 service list: (1.776017854s)
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 service --namespace=default --https --url hello-node
functional_test.go:1476: found endpoint: https://192.168.49.2:31774
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 service hello-node --url --format={{.IP}}
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 service hello-node --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1511: found endpoint for hello-node: http://192.168.49.2:31774
--- PASS: TestFunctional/parallel/ServiceCmd (9.94s)
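The URLs reported above are just the node IP joined to the service's NodePort. A minimal sketch of that assembly, using the values captured in this run (the kubectl jsonpath queries in the comments are the cluster-side equivalents and need a live cluster):

```shell
# `minikube service hello-node --url` resolves to http://<node IP>:<nodePort>.
# With kubectl alone, the two pieces come from (requires a running cluster):
#   kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
#   kubectl get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'
ip=192.168.49.2   # node IP observed in this run
port=31774        # NodePort observed in this run
echo "http://$ip:$port"
```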

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (15.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-164150 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-164150 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-sdsq2" [0506431b-57cb-4422-9fa9-991f25587707] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
2022/10/31 16:43:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-sdsq2" [0506431b-57cb-4422-9fa9-991f25587707] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.006313929s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 service hello-node-connect --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.49.2:31873
functional_test.go:1605: http://192.168.49.2:31873: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6458c8fb6f-sdsq2

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31873
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (15.72s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 addons list

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (42.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [6e2ec56e-aea1-45dd-8ca7-4aaff217da27] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015665956s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-164150 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-164150 apply -f testdata/storage-provisioner/pvc.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-164150 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-164150 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-164150 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [f489685b-f7d5-41bf-96c8-fe7b43bf5db3] Pending
helpers_test.go:342: "sp-pod" [f489685b-f7d5-41bf-96c8-fe7b43bf5db3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [f489685b-f7d5-41bf-96c8-fe7b43bf5db3] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.008209063s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-164150 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-164150 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-164150 delete -f testdata/storage-provisioner/pod.yaml: (1.368219589s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-164150 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [cbc48471-47fb-4981-bfe3-f1bf2c298a3f] Pending
helpers_test.go:342: "sp-pod" [cbc48471-47fb-4981-bfe3-f1bf2c298a3f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [cbc48471-47fb-4981-bfe3-f1bf2c298a3f] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.008211828s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-164150 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.88s)
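The sequence above is a persistence round-trip: write through the claim from one pod, delete that pod, mount the same claim in a fresh pod, and confirm the file is still there. Sketched with a plain directory standing in for the provisioned volume (a stand-in only, not the real storage-provisioner):

```shell
volume=$(mktemp -d)      # stands in for the PersistentVolume backing "myclaim"

# First sp-pod: write through the mount
# (the log's `kubectl exec sp-pod -- touch /tmp/mount/foo`).
touch "$volume/foo"

# Delete the pod; the claim, and therefore the volume, outlives it.
# The second sp-pod mounts the same claim and sees the file:
ls "$volume"             # prints: foo
```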

                                                
                                    
TestFunctional/parallel/SSHCmd (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.85s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh -n functional-164150 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 cp functional-164150:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd64951471/001/cp-test.txt
E1031 16:44:14.061348   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh -n functional-164150 "sudo cat /home/docker/cp-test.txt"
E1031 16:44:14.218916   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
E1031 16:44:14.379332   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

                                                
                                    
TestFunctional/parallel/MySQL (31.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-164150 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-596b7fcdbf-ffnn5" [6e85e2a1-7bfe-4737-8ee2-42f4c60784f4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-ffnn5" [6e85e2a1-7bfe-4737-8ee2-42f4c60784f4] Running
E1031 16:44:19.182346   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.007808158s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-164150 exec mysql-596b7fcdbf-ffnn5 -- mysql -ppassword -e "show databases;"
E1031 16:44:24.303244   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-164150 exec mysql-596b7fcdbf-ffnn5 -- mysql -ppassword -e "show databases;": exit status 1 (200.85346ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-164150 exec mysql-596b7fcdbf-ffnn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-164150 exec mysql-596b7fcdbf-ffnn5 -- mysql -ppassword -e "show databases;": exit status 1 (155.662416ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-164150 exec mysql-596b7fcdbf-ffnn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-164150 exec mysql-596b7fcdbf-ffnn5 -- mysql -ppassword -e "show databases;": exit status 1 (123.137246ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-164150 exec mysql-596b7fcdbf-ffnn5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.34s)
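The three failed `show databases;` runs are expected while mysqld finishes initializing; the test simply polls until the client succeeds. That poll-until-ready pattern, sketched with a stub standing in for the real `kubectl exec ... mysql` call (the stub is illustrative only):

```shell
# try_until <max_attempts> <delay_seconds> <command...>: retry until success.
try_until() {
  max=$1; delay=$2; shift 2
  i=1
  while ! "$@"; do
    [ "$i" -ge "$max" ] && return 1
    i=$((i + 1))
    sleep "$delay"
  done
}

# Stub for `mysql -ppassword -e "show databases;"`: fails three times
# (the "Access denied" / "Can't connect" phase above), then succeeds.
counter=$(mktemp)
echo 0 > "$counter"
stub_mysql() {
  n=$(($(cat "$counter") + 1))
  echo "$n" > "$counter"
  [ "$n" -ge 4 ]
}

try_until 10 0 stub_mysql && echo "mysql ready after $(cat "$counter") attempts"
```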

                                                
                                    
TestFunctional/parallel/FileSync (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/10097/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "sudo cat /etc/test/nested/copy/10097/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

                                                
                                    
TestFunctional/parallel/CertSync (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/10097.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "sudo cat /etc/ssl/certs/10097.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/10097.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "sudo cat /usr/share/ca-certificates/10097.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /etc/ssl/certs/100972.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "sudo cat /etc/ssl/certs/100972.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/100972.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "sudo cat /usr/share/ca-certificates/100972.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.38s)
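`/etc/ssl/certs/51391683.0` and `/etc/ssl/certs/3ec20f2e.0` are OpenSSL subject-hash names: c_rehash-style certificate directories link each CA under `<subject_hash>.<n>`, which is what lets the synced `.pem` files be found by lookup. The hash for any cert can be computed like this (throwaway self-signed cert for illustration; assumes `openssl` is installed):

```shell
# Generate a throwaway self-signed certificate, then print the name under
# which an OpenSSL cert directory (c_rehash style) would link it.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=certsync-demo" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
hash=$(openssl x509 -in "$tmp/cert.pem" -noout -subject_hash)
echo "directory entry: $hash.0"
```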

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-164150 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "sudo systemctl is-active docker"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164150 ssh "sudo systemctl is-active docker": exit status 1 (392.47601ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164150 ssh "sudo systemctl is-active crio": exit status 1 (427.716638ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)

                                                
                                    
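The two probes above rely on `systemctl is-active` printing `inactive` and exiting non-zero (status 3, which ssh surfaces) when a unit is installed but stopped. A minimal sketch of that interpretation, with the remote call simulated locally so it is self-contained (the `check_inactive` helper and the simulated command are hypothetical, not part of the test suite):

```shell
#!/bin/sh
# Interpret a `systemctl is-active <unit>`-style result: "inactive" on stdout
# plus exit status 3 is the expected state for a runtime that is installed
# but disabled (here: docker and crio, when containerd is the active runtime).
check_inactive() {
  out=$("$@" 2>/dev/null)
  rc=$?
  if [ "$rc" -eq 3 ] && [ "$out" = "inactive" ]; then
    echo "ok: runtime reports inactive (rc=$rc)"
  else
    echo "unexpected: rc=$rc out=$out"
  fi
}

# Simulated remote command standing in for:
#   minikube ssh "sudo systemctl is-active docker"
check_inactive sh -c 'echo inactive; exit 3'
# prints "ok: runtime reports inactive (rc=3)"
```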
TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)
TestFunctional/parallel/MountCmd/any-port (11.68s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-164150 /tmp/TestFunctionalparallelMountCmdany-port201336412/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1667234620095833054" to /tmp/TestFunctionalparallelMountCmdany-port201336412/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1667234620095833054" to /tmp/TestFunctionalparallelMountCmdany-port201336412/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1667234620095833054" to /tmp/TestFunctionalparallelMountCmdany-port201336412/001/test-1667234620095833054
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164150 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (457.882739ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 31 16:43 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 31 16:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 31 16:43 test-1667234620095833054
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh cat /mount-9p/test-1667234620095833054
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-164150 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [a5ab2464-5190-48ab-aec1-0b8a2a0d17ab] Pending
helpers_test.go:342: "busybox-mount" [a5ab2464-5190-48ab-aec1-0b8a2a0d17ab] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:342: "busybox-mount" [a5ab2464-5190-48ab-aec1-0b8a2a0d17ab] Running
helpers_test.go:342: "busybox-mount" [a5ab2464-5190-48ab-aec1-0b8a2a0d17ab] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:342: "busybox-mount" [a5ab2464-5190-48ab-aec1-0b8a2a0d17ab] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.0067977s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-164150 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-164150 /tmp/TestFunctionalparallelMountCmdany-port201336412/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.68s)
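The first `findmnt` probe above fails because `minikube mount` attaches the 9p filesystem asynchronously; the test simply retries the probe until the mount appears. A sketch of that retry loop, with the probe simulated so the example is self-contained (`wait_for_mount` and `fake_probe` are hypothetical helpers; the real probe is `minikube ssh "findmnt -T /mount-9p | grep 9p"`):

```shell
#!/bin/sh
# Retry a mount probe until it succeeds, as the test does after starting
# `minikube mount` in the background: the first check can legitimately run
# before the 9p filesystem is attached inside the node.
wait_for_mount() {
  probe=$1
  tries=$2
  i=1
  while [ "$i" -le "$tries" ]; do
    if "$probe"; then
      echo "mounted after $i probe(s)"
      return 0
    fi
    i=$((i + 1))
  done
  echo "not mounted after $tries probes"
  return 1
}

# Simulated probe standing in for `findmnt -T /mount-9p | grep 9p`:
# fails twice, then succeeds.
n=0
fake_probe() {
  n=$((n + 1))
  [ "$n" -ge 3 ]
}
wait_for_mount fake_probe 5
# prints "mounted after 3 probe(s)"
```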
TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1311: Took "432.586132ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "104.58504ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1362: Took "442.402338ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "107.178269ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-164150 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-164150 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [7f6d8c88-bd77-497b-a3d5-010a92ca9b18] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx-svc" [7f6d8c88-bd77-497b-a3d5-010a92ca9b18] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.010214077s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.26s)
TestFunctional/parallel/MountCmd/specific-port (2.5s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-164150 /tmp/TestFunctionalparallelMountCmdspecific-port2948766915/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164150 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (351.844172ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-164150 /tmp/TestFunctionalparallelMountCmdspecific-port2948766915/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164150 ssh "sudo umount -f /mount-9p": exit status 1 (430.835177ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-164150 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-164150 /tmp/TestFunctionalparallelMountCmdspecific-port2948766915/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.50s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-164150 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.110.109.77 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-164150 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)
TestFunctional/parallel/Version/components (0.65s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.65s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-164150 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-164150
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-164150
docker.io/kindest/kindnetd:v20221004-44d545d1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-164150 image ls --format table:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| k8s.gcr.io/pause                            | 3.3                | sha256:0184c1 | 298kB  |
| localhost/my-image                          | functional-164150  | sha256:d5bb88 | 775kB  |
| registry.k8s.io/coredns/coredns             | v1.9.3             | sha256:5185b9 | 14.8MB |
| k8s.gcr.io/echoserver                       | 1.8                | sha256:82e4c8 | 46.2MB |
| docker.io/library/nginx                     | latest             | sha256:76c69f | 56.8MB |
| registry.k8s.io/kube-controller-manager     | v1.25.3            | sha256:603999 | 31.3MB |
| registry.k8s.io/kube-scheduler              | v1.25.3            | sha256:6d23ec | 15.8MB |
| registry.k8s.io/pause                       | 3.8                | sha256:487387 | 311kB  |
| docker.io/kindest/kindnetd                  | v20221004-44d545d1 | sha256:d6e3e2 | 25.8MB |
| docker.io/library/minikube-local-cache-test | functional-164150  | sha256:eeabaa | 1.74kB |
| gcr.io/google-containers/addon-resizer      | functional-164150  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/pause                            | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/etcd                        | 3.5.4-0            | sha256:a8a176 | 102MB  |
| registry.k8s.io/kube-apiserver              | v1.25.3            | sha256:0346db | 34.2MB |
| docker.io/library/mysql                     | 5.7                | sha256:149052 | 144MB  |
| docker.io/library/nginx                     | alpine             | sha256:b99730 | 10.2MB |
| k8s.gcr.io/pause                            | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/kube-proxy                  | v1.25.3            | sha256:beaaf0 | 20.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
|---------------------------------------------|--------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-164150 image ls --format json:
[{"id":"sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f","repoDigests":["docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"],"repoTags":["docker.io/kindest/kindnetd:v20221004-44d545d1"],"size":"25830582"},{"id":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":["registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"14837849"},{"id":"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":["registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"],"repoTags":["registry.k8s.io/pause:3.8"],"size":"311286"},{"id":"sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"31261869"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-164150"],"size":"10823156"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":["registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"102157811"},{"id":"sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":["registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"20265805"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:14905234a4ed471d6da5b7e09d9e9f62f4d350713e2b0e8c86652ebcbf710238","repoDigests":["docker.io/library/mysql@sha256:f5e2d4d7dccdc3f2a1d592bd3f0eb472b2f72f9fb942a84ff5b5cc049fe63a04"],"repoTags":["docker.io/library/mysql:5.7"],"size":"144343859"},{"id":"sha256:b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070","repoDigests":["docker.io/library/nginx@sha256:2452715dd322b3273419652b7721b64aa60305f606ef7a674ae28b6f12d155a3"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10243852"},{"id":"sha256:76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f","repoDigests":["docker.io/library/nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f"],"repoTags":["docker.io/library/nginx:latest"],"size":"56841090"},{"id":"sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"15798744"},{"id":"sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"34238163"},{"id":"sha256:eeabaad05cc4e2e8a2934037c5859c4ba64dc3b98ef6cb2bb0b17fc762c3e6cd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-164150"],"size":"1736"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:d5bb88dfe3cc4195c871c0b6b98e735d75ee3363333fc997aa6fe3e046088cbc","repoDigests":[],"repoTags":["localhost/my-image:functional-164150"],"size":"775254"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-164150 image ls --format yaml:
- id: sha256:eeabaad05cc4e2e8a2934037c5859c4ba64dc3b98ef6cb2bb0b17fc762c3e6cd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-164150
size: "1736"
- id: sha256:14905234a4ed471d6da5b7e09d9e9f62f4d350713e2b0e8c86652ebcbf710238
repoDigests:
- docker.io/library/mysql@sha256:f5e2d4d7dccdc3f2a1d592bd3f0eb472b2f72f9fb942a84ff5b5cc049fe63a04
repoTags:
- docker.io/library/mysql:5.7
size: "144343859"
- id: sha256:b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070
repoDigests:
- docker.io/library/nginx@sha256:2452715dd322b3273419652b7721b64aa60305f606ef7a674ae28b6f12d155a3
repoTags:
- docker.io/library/nginx:alpine
size: "10243852"
- id: sha256:76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f
repoDigests:
- docker.io/library/nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f
repoTags:
- docker.io/library/nginx:latest
size: "56841090"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-164150
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "34238163"
- id: sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f
repoDigests:
- docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe
repoTags:
- docker.io/kindest/kindnetd:v20221004-44d545d1
size: "25830582"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests:
- registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "102157811"
- id: sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "15798744"
- id: sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests:
- registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d
repoTags:
- registry.k8s.io/pause:3.8
size: "311286"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "14837849"
- id: sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "31261869"
- id: sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "20265805"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164150 ssh pgrep buildkitd: exit status 1 (337.385916ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image build -t localhost/my-image:functional-164150 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 image build -t localhost/my-image:functional-164150 testdata/build: (4.011637169s)
functional_test.go:319: (dbg) Stderr: out/minikube-linux-amd64 -p functional-164150 image build -t localhost/my-image:functional-164150 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.2s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 1.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:52378cf58182d0d66b616fcb45ef6a6f49e12274b8fb702ab0d7e497bfe511ae 0.0s done
#8 exporting config sha256:d5bb88dfe3cc4195c871c0b6b98e735d75ee3363333fc997aa6fe3e046088cbc done
#8 naming to localhost/my-image:functional-164150 done
#8 DONE 0.1s
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image ls
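Judging from the BuildKit steps recorded above (a 97-byte Dockerfile, a `gcr.io/k8s-minikube/busybox` base image, a `RUN true` step, and an `ADD content.txt /` step), the `testdata/build` Dockerfile being exercised is presumably equivalent to the following sketch (reconstructed from the log, not copied from the repository):

```dockerfile
# Base image resolved in build step #3/#5 of the log above
FROM gcr.io/k8s-minikube/busybox
# Step #6 [2/3]: a no-op layer
RUN true
# Step #7 [3/3]: content.txt comes from the 62-byte build context
ADD content.txt /
```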
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.60s)

TestFunctional/parallel/ImageCommands/Setup (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.510861698s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-164150
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.55s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image load --daemon gcr.io/google-containers/addon-resizer:functional-164150

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 image load --daemon gcr.io/google-containers/addon-resizer:functional-164150: (5.276527635s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.53s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image load --daemon gcr.io/google-containers/addon-resizer:functional-164150

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 image load --daemon gcr.io/google-containers/addon-resizer:functional-164150: (4.372586397s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.60s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.279171051s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-164150
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image load --daemon gcr.io/google-containers/addon-resizer:functional-164150

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 image load --daemon gcr.io/google-containers/addon-resizer:functional-164150: (5.147248037s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.76s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image save gcr.io/google-containers/addon-resizer:functional-164150 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 image save gcr.io/google-containers/addon-resizer:functional-164150 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.56882617s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.57s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image rm gcr.io/google-containers/addon-resizer:functional-164150
E1031 16:44:14.067232   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
E1031 16:44:14.077540   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
E1031 16:44:14.097854   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
E1031 16:44:14.138145   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
E1031 16:44:14.700343   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.617881132s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image ls
E1031 16:44:16.621560   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.95s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-164150
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-164150 image save --daemon gcr.io/google-containers/addon-resizer:functional-164150
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-164150 image save --daemon gcr.io/google-containers/addon-resizer:functional-164150: (1.54988292s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-164150
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.60s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-164150
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-164150
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-164150
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (66.41s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-164433 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1031 16:44:34.543703   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
E1031 16:44:55.024543   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
E1031 16:45:35.984731   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-164433 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m6.411922881s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (66.41s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.28s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-164433 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-164433 addons enable ingress --alsologtostderr -v=5: (13.284574717s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.28s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-164433 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (43.54s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:165: (dbg) Run:  kubectl --context ingress-addon-legacy-164433 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:165: (dbg) Done: kubectl --context ingress-addon-legacy-164433 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.118600166s)
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-164433 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context ingress-addon-legacy-164433 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [c84f2ff3-50b5-4b45-aa11-7b843e5339d7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [c84f2ff3-50b5-4b45-aa11-7b843e5339d7] Running
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.005736867s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-164433 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: (dbg) Run:  kubectl --context ingress-addon-legacy-164433 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-164433 ip
addons_test.go:250: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-164433 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-164433 addons disable ingress-dns --alsologtostderr -v=1: (13.896643319s)
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-164433 addons disable ingress --alsologtostderr -v=1
addons_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-164433 addons disable ingress --alsologtostderr -v=1: (7.250022235s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (43.54s)

TestJSONOutput/start/Command (44.06s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-164639 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1031 16:46:57.905352   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-164639 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (44.05586805s)
--- PASS: TestJSONOutput/start/Command (44.06s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-164639 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-164639 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-164639 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-164639 --output=json --user=testUser: (5.802834604s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-164735 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-164735 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.948285ms)

-- stdout --
	{"specversion":"1.0","id":"ba1b54b5-8089-43ff-b59d-fe543208a3a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-164735] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"22379e75-254b-449c-9bca-421af8add0ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15232"}}
	{"specversion":"1.0","id":"68e845dd-a8dd-4d9d-84a8-ca3f703f9f07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"710ed99a-1a3a-4cf4-b044-e9a462b9c9e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig"}}
	{"specversion":"1.0","id":"fa5f59a1-b61e-4ec9-b532-0e96dc014bc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube"}}
	{"specversion":"1.0","id":"546003e4-77ac-4c41-87d4-6065b334874e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8a681efd-b259-4429-b7c5-8a30816e3826","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-164735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-164735
--- PASS: TestErrorJSONOutput (0.28s)
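Each line in the stdout above is one CloudEvents JSON envelope emitted by `minikube start --output=json`. As a minimal sketch (not part of the test suite; field names and the `io.k8s.sigs.minikube.error` event type are taken from the output above), the error event could be picked out of such output like this:

```python
import json

def extract_error(lines):
    """Return (exitcode, message) from the first minikube error event, or None.

    Each input line is expected to be one CloudEvents JSON envelope as
    printed by `minikube start --output=json`.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # skip interleaved non-JSON log lines
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            data = event["data"]
            return int(data["exitcode"]), data["message"]
    return None

sample = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info",'
    '"data":{"message":"MINIKUBE_LOCATION=15232"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"56","message":"The driver \'fail\' is not supported on linux/amd64"}}',
]
print(extract_error(sample))
```

The extracted exit code 56 matches the process's non-zero exit status reported in the `Non-zero exit` line above.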

TestKicCustomNetwork/create_custom_network (32.11s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-164736 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-164736 --network=: (29.935564579s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-164736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-164736
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-164736: (2.150717647s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.11s)

TestKicCustomNetwork/use_default_bridge_network (28.03s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-164808 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-164808 --network=bridge: (26.031895554s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-164808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-164808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-164808: (1.974216339s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.03s)

TestKicExistingNetwork (30.01s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-164836 --network=existing-network
E1031 16:48:41.813167   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:48:41.818519   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:48:41.828780   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:48:41.849090   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:48:41.889394   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:48:41.969738   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:48:42.130172   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:48:42.450799   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:48:43.091749   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:48:44.372393   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:48:46.932993   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:48:52.053217   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:49:02.293539   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-164836 --network=existing-network: (27.778443631s)
helpers_test.go:175: Cleaning up "existing-network-164836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-164836
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-164836: (2.061120619s)
--- PASS: TestKicExistingNetwork (30.01s)

TestKicCustomSubnet (28.32s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-164906 --subnet=192.168.60.0/24
E1031 16:49:14.061737   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
E1031 16:49:22.774378   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-164906 --subnet=192.168.60.0/24: (26.192073127s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-164906 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-164906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-164906
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-164906: (2.100277115s)
--- PASS: TestKicCustomSubnet (28.32s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (61.94s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-164934 --driver=docker  --container-runtime=containerd
E1031 16:49:41.746275   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-164934 --driver=docker  --container-runtime=containerd: (22.900592268s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-164934 --driver=docker  --container-runtime=containerd
E1031 16:50:03.734547   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-164934 --driver=docker  --container-runtime=containerd: (33.721502707s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-164934
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-164934
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-164934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-164934
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-164934: (1.915706679s)
helpers_test.go:175: Cleaning up "first-164934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-164934
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-164934: (2.171581527s)
--- PASS: TestMinikubeProfile (61.94s)

TestMountStart/serial/StartWithMountFirst (5s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-165036 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-165036 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.0025806s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.00s)

TestMountStart/serial/VerifyMountFirst (0.32s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-165036 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

TestMountStart/serial/StartWithMountSecond (4.93s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-165036 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-165036 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.932267559s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.93s)

TestMountStart/serial/VerifyMountSecond (0.33s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165036 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.33s)

TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-165036 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-165036 --alsologtostderr -v=5: (1.72763477s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165036 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-165036
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-165036: (1.25132917s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (6.53s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-165036
E1031 16:50:53.550186   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:50:53.555498   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:50:53.565834   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:50:53.586197   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:50:53.626513   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:50:53.706842   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:50:53.867352   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:50:54.188100   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:50:54.829056   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:50:56.110141   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-165036: (5.525930486s)
--- PASS: TestMountStart/serial/RestartStopped (6.53s)

TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165036 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (89.62s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-165059 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1031 16:51:03.792221   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:51:14.033281   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:51:25.654920   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:51:34.514467   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:52:15.475597   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-165059 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m29.085757621s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (89.62s)

TestMultiNode/serial/DeployApp2Nodes (4.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-165059 -- rollout status deployment/busybox: (2.690944395s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- exec busybox-65db55d5d6-2pm2c -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- exec busybox-65db55d5d6-48mqw -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- exec busybox-65db55d5d6-2pm2c -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- exec busybox-65db55d5d6-48mqw -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- exec busybox-65db55d5d6-2pm2c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- exec busybox-65db55d5d6-48mqw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.49s)

TestMultiNode/serial/PingHostFrom2Pods (0.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- exec busybox-65db55d5d6-2pm2c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- exec busybox-65db55d5d6-2pm2c -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- exec busybox-65db55d5d6-48mqw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-165059 -- exec busybox-65db55d5d6-48mqw -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

TestMultiNode/serial/AddNode (42.55s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-165059 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-165059 -v 3 --alsologtostderr: (41.841202251s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.55s)

TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (11.63s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 cp testdata/cp-test.txt multinode-165059:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 cp multinode-165059:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2121854473/001/cp-test_multinode-165059.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 cp multinode-165059:/home/docker/cp-test.txt multinode-165059-m02:/home/docker/cp-test_multinode-165059_multinode-165059-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059-m02 "sudo cat /home/docker/cp-test_multinode-165059_multinode-165059-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 cp multinode-165059:/home/docker/cp-test.txt multinode-165059-m03:/home/docker/cp-test_multinode-165059_multinode-165059-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059-m03 "sudo cat /home/docker/cp-test_multinode-165059_multinode-165059-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 cp testdata/cp-test.txt multinode-165059-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 cp multinode-165059-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2121854473/001/cp-test_multinode-165059-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 cp multinode-165059-m02:/home/docker/cp-test.txt multinode-165059:/home/docker/cp-test_multinode-165059-m02_multinode-165059.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059 "sudo cat /home/docker/cp-test_multinode-165059-m02_multinode-165059.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 cp multinode-165059-m02:/home/docker/cp-test.txt multinode-165059-m03:/home/docker/cp-test_multinode-165059-m02_multinode-165059-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059-m03 "sudo cat /home/docker/cp-test_multinode-165059-m02_multinode-165059-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 cp testdata/cp-test.txt multinode-165059-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 cp multinode-165059-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2121854473/001/cp-test_multinode-165059-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 cp multinode-165059-m03:/home/docker/cp-test.txt multinode-165059:/home/docker/cp-test_multinode-165059-m03_multinode-165059.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059 "sudo cat /home/docker/cp-test_multinode-165059-m03_multinode-165059.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 cp multinode-165059-m03:/home/docker/cp-test.txt multinode-165059-m02:/home/docker/cp-test_multinode-165059-m03_multinode-165059-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 ssh -n multinode-165059-m02 "sudo cat /home/docker/cp-test_multinode-165059-m03_multinode-165059-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.63s)

TestMultiNode/serial/StopNode (2.37s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-165059 node stop m03: (1.246376819s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-165059 status: exit status 7 (564.275608ms)

-- stdout --
	multinode-165059
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-165059-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-165059-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-165059 status --alsologtostderr: exit status 7 (555.205573ms)

-- stdout --
	multinode-165059
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-165059-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-165059-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1031 16:53:30.846732  101173 out.go:296] Setting OutFile to fd 1 ...
	I1031 16:53:30.846868  101173 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 16:53:30.846879  101173 out.go:309] Setting ErrFile to fd 2...
	I1031 16:53:30.846883  101173 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 16:53:30.846986  101173 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
	I1031 16:53:30.847136  101173 out.go:303] Setting JSON to false
	I1031 16:53:30.847166  101173 mustload.go:65] Loading cluster: multinode-165059
	I1031 16:53:30.847207  101173 notify.go:220] Checking for updates...
	I1031 16:53:30.847523  101173 config.go:180] Loaded profile config "multinode-165059": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 16:53:30.847542  101173 status.go:255] checking status of multinode-165059 ...
	I1031 16:53:30.848042  101173 cli_runner.go:164] Run: docker container inspect multinode-165059 --format={{.State.Status}}
	I1031 16:53:30.878841  101173 status.go:330] multinode-165059 host status = "Running" (err=<nil>)
	I1031 16:53:30.878879  101173 host.go:66] Checking if "multinode-165059" exists ...
	I1031 16:53:30.879142  101173 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-165059
	I1031 16:53:30.903301  101173 host.go:66] Checking if "multinode-165059" exists ...
	I1031 16:53:30.903567  101173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 16:53:30.903608  101173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-165059
	I1031 16:53:30.928568  101173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49227 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/multinode-165059/id_rsa Username:docker}
	I1031 16:53:31.008939  101173 ssh_runner.go:195] Run: systemctl --version
	I1031 16:53:31.012453  101173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 16:53:31.021687  101173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 16:53:31.119452  101173 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-10-31 16:53:31.04207431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 16:53:31.119958  101173 kubeconfig.go:92] found "multinode-165059" server: "https://192.168.58.2:8443"
	I1031 16:53:31.119983  101173 api_server.go:165] Checking apiserver status ...
	I1031 16:53:31.120012  101173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 16:53:31.129114  101173 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	I1031 16:53:31.136211  101173 api_server.go:181] apiserver freezer: "5:freezer:/docker/ad6613173d6971c7357d5013312fe09da69f02a449d3c1786382dd75318666b4/kubepods/burstable/podb45cb1d09ea2ef7b06da5bbc3a5f10ee/d6bc474930b9c0673a76d9c4d3de7d85b05d7253ce766c21f530cada4b82624a"
	I1031 16:53:31.136281  101173 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ad6613173d6971c7357d5013312fe09da69f02a449d3c1786382dd75318666b4/kubepods/burstable/podb45cb1d09ea2ef7b06da5bbc3a5f10ee/d6bc474930b9c0673a76d9c4d3de7d85b05d7253ce766c21f530cada4b82624a/freezer.state
	I1031 16:53:31.142589  101173 api_server.go:203] freezer state: "THAWED"
	I1031 16:53:31.142621  101173 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1031 16:53:31.147228  101173 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1031 16:53:31.147261  101173 status.go:421] multinode-165059 apiserver status = Running (err=<nil>)
	I1031 16:53:31.147274  101173 status.go:257] multinode-165059 status: &{Name:multinode-165059 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1031 16:53:31.147298  101173 status.go:255] checking status of multinode-165059-m02 ...
	I1031 16:53:31.147615  101173 cli_runner.go:164] Run: docker container inspect multinode-165059-m02 --format={{.State.Status}}
	I1031 16:53:31.170967  101173 status.go:330] multinode-165059-m02 host status = "Running" (err=<nil>)
	I1031 16:53:31.170997  101173 host.go:66] Checking if "multinode-165059-m02" exists ...
	I1031 16:53:31.171294  101173 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-165059-m02
	I1031 16:53:31.196426  101173 host.go:66] Checking if "multinode-165059-m02" exists ...
	I1031 16:53:31.196660  101173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 16:53:31.196701  101173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-165059-m02
	I1031 16:53:31.219973  101173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49232 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/multinode-165059-m02/id_rsa Username:docker}
	I1031 16:53:31.300573  101173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 16:53:31.310283  101173 status.go:257] multinode-165059-m02 status: &{Name:multinode-165059-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1031 16:53:31.310324  101173 status.go:255] checking status of multinode-165059-m03 ...
	I1031 16:53:31.310679  101173 cli_runner.go:164] Run: docker container inspect multinode-165059-m03 --format={{.State.Status}}
	I1031 16:53:31.334679  101173 status.go:330] multinode-165059-m03 host status = "Stopped" (err=<nil>)
	I1031 16:53:31.334707  101173 status.go:343] host is not running, skipping remaining checks
	I1031 16:53:31.334713  101173 status.go:257] multinode-165059-m03 status: &{Name:multinode-165059-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
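The stderr trace above shows the sequence `status` walks before reporting the apiserver as Running: find the kube-apiserver PID with pgrep, grep the `freezer` entry out of `/proc/<pid>/cgroup`, confirm `freezer.state` is `THAWED` (the container is not frozen), and only then probe `/healthz`. A minimal Go sketch of the cgroup-parsing step; `freezerPath` is a hypothetical helper and the sample line uses shortened container/pod IDs:

```go
package main

import (
	"fmt"
	"strings"
)

// sampleCgroupLine mimics the line the status check greps out of
// /proc/<pid>/cgroup for the kube-apiserver process (IDs shortened here).
const sampleCgroupLine = "5:freezer:/docker/ad6613/kubepods/burstable/podb45c/d6bc47"

// freezerPath extracts the freezer hierarchy path from an
// "<id>:freezer:<path>" cgroup line; minikube then reads
// /sys/fs/cgroup/freezer/<path>/freezer.state and proceeds to the
// /healthz probe only when the state is THAWED.
func freezerPath(line string) (string, bool) {
	parts := strings.SplitN(line, ":", 3)
	if len(parts) != 3 || parts[1] != "freezer" {
		return "", false
	}
	return parts[2], true
}

func main() {
	if path, ok := freezerPath(sampleCgroupLine); ok {
		// The file the check reads next on a cgroup-v1 host.
		fmt.Println("/sys/fs/cgroup/freezer" + path + "/freezer.state")
	}
}
```

This mirrors the `api_server.go:181` / `api_server.go:203` lines in the log; on a cgroup-v2 host the layout differs, so treat the path construction as illustrative.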

TestMultiNode/serial/StartAfterStop (31.12s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 node start m03 --alsologtostderr
E1031 16:53:37.395771   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:53:41.813489   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-165059 node start m03 --alsologtostderr: (30.334070579s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.12s)

TestMultiNode/serial/RestartKeepsNodes (171.94s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-165059
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-165059
E1031 16:54:09.496817   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:54:14.062100   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-165059: (41.028471767s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-165059 --wait=true -v=8 --alsologtostderr
E1031 16:55:53.549549   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 16:56:21.236272   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-165059 --wait=true -v=8 --alsologtostderr: (2m10.763844966s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-165059
--- PASS: TestMultiNode/serial/RestartKeepsNodes (171.94s)

TestMultiNode/serial/DeleteNode (4.94s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-165059 node delete m03: (4.273115411s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.94s)

TestMultiNode/serial/StopMultiNode (40.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-165059 stop: (39.83428818s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-165059 status: exit status 7 (122.915094ms)

-- stdout --
	multinode-165059
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-165059-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-165059 status --alsologtostderr: exit status 7 (116.88218ms)

-- stdout --
	multinode-165059
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-165059-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1031 16:57:39.363975  111898 out.go:296] Setting OutFile to fd 1 ...
	I1031 16:57:39.364116  111898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 16:57:39.364127  111898 out.go:309] Setting ErrFile to fd 2...
	I1031 16:57:39.364132  111898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 16:57:39.364260  111898 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
	I1031 16:57:39.364429  111898 out.go:303] Setting JSON to false
	I1031 16:57:39.364460  111898 mustload.go:65] Loading cluster: multinode-165059
	I1031 16:57:39.364492  111898 notify.go:220] Checking for updates...
	I1031 16:57:39.364815  111898 config.go:180] Loaded profile config "multinode-165059": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 16:57:39.364832  111898 status.go:255] checking status of multinode-165059 ...
	I1031 16:57:39.365213  111898 cli_runner.go:164] Run: docker container inspect multinode-165059 --format={{.State.Status}}
	I1031 16:57:39.387767  111898 status.go:330] multinode-165059 host status = "Stopped" (err=<nil>)
	I1031 16:57:39.387800  111898 status.go:343] host is not running, skipping remaining checks
	I1031 16:57:39.387808  111898 status.go:257] multinode-165059 status: &{Name:multinode-165059 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1031 16:57:39.387846  111898 status.go:255] checking status of multinode-165059-m02 ...
	I1031 16:57:39.388188  111898 cli_runner.go:164] Run: docker container inspect multinode-165059-m02 --format={{.State.Status}}
	I1031 16:57:39.411321  111898 status.go:330] multinode-165059-m02 host status = "Stopped" (err=<nil>)
	I1031 16:57:39.411350  111898 status.go:343] host is not running, skipping remaining checks
	I1031 16:57:39.411359  111898 status.go:257] multinode-165059-m02 status: &{Name:multinode-165059-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.07s)

TestMultiNode/serial/RestartMultiNode (101.48s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-165059 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1031 16:58:41.813572   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 16:59:14.061328   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-165059 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m40.796877244s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-165059 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (101.48s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (25.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-165059
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-165059-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-165059-m02 --driver=docker  --container-runtime=containerd: exit status 14 (93.784694ms)

-- stdout --
	* [multinode-165059-m02] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-165059-m02' is duplicated with machine name 'multinode-165059-m02' in profile 'multinode-165059'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-165059-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-165059-m03 --driver=docker  --container-runtime=containerd: (22.900025359s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-165059
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-165059: exit status 80 (336.731193ms)

-- stdout --
	* Adding node m03 to cluster multinode-165059
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-165059-m03 already exists in multinode-165059-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-165059-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-165059-m03: (1.950098748s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.35s)

TestScheduledStopUnix (99.27s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-170549 --memory=2048 --driver=docker  --container-runtime=containerd
E1031 17:05:53.550581   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-170549 --memory=2048 --driver=docker  --container-runtime=containerd: (22.559224785s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-170549 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-170549 -n scheduled-stop-170549
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-170549 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-170549 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-170549 -n scheduled-stop-170549
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-170549
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-170549 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1031 17:07:16.597164   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-170549
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-170549: exit status 7 (93.601328ms)

-- stdout --
	scheduled-stop-170549
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-170549 -n scheduled-stop-170549
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-170549 -n scheduled-stop-170549: exit status 7 (96.428191ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-170549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-170549
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-170549: (4.958072143s)
--- PASS: TestScheduledStopUnix (99.27s)

TestInsufficientStorage (15.41s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-170728 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-170728 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.819881479s)

-- stdout --
	{"specversion":"1.0","id":"9a2b5abc-6fa1-4f8f-9d90-0d3a11cb9732","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-170728] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2240fa16-b550-4221-a6d3-593e2b9e3539","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15232"}}
	{"specversion":"1.0","id":"9d013c21-52a2-4271-b8b4-8ef070c994dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c500bb7c-6db8-45c2-8b20-b7e1ef2c575b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig"}}
	{"specversion":"1.0","id":"bdc814a4-1fad-452e-b86a-babe5f30ec5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube"}}
	{"specversion":"1.0","id":"58aa3d0a-329d-462d-a783-68c659839ed7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6cdd109e-3f7b-409f-afaf-b1875a105402","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"79604d7c-abc8-4f52-a6fb-7b1bd047ccdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d4c8fff8-581a-4da1-aaaa-8309e683ffde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e1bd887-7567-4b20-a3c3-cca12fcdde4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"96ecfa2c-339a-45ff-afcd-e723aca14e5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-170728 in cluster insufficient-storage-170728","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e745e74c-402b-4a58-aedb-5d52b8666f0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a1d7db19-730e-4665-aa46-c755f25d333f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"559af726-9997-418e-967d-b957adda95aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
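Each stdout line above is one CloudEvents JSON object, as emitted by minikube's `--output=json` start mode. A minimal sketch of consuming that stream (the two event strings are abridged copies of events from this run; nothing else is assumed):

```python
import json

# Abridged copies of two events from the stdout above; minikube emits one
# CloudEvents JSON object per line when started with --output=json.
lines = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.step",'
    '"data":{"currentstep":"1","totalsteps":"19",'
    '"message":"Using the docker driver based on user configuration"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}',
]

events = [json.loads(line) for line in lines]
for ev in events:
    # The last dotted component of "type" distinguishes step/info/error events.
    kind = ev["type"].rsplit(".", 1)[-1]
    if kind == "step":
        print(f'step {ev["data"]["currentstep"]}/{ev["data"]["totalsteps"]}')
    elif kind == "error":
        print("fatal:", ev["data"]["name"], "exit", ev["data"]["exitcode"])
```

This is why the RSRC_DOCKER_STORAGE failure above is machine-readable: a driver script can branch on the event `type` and `exitcode` instead of grepping free-form text.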
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-170728 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-170728 --output=json --layout=cluster: exit status 7 (333.116482ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-170728","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-170728","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 17:07:38.078441  135015 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-170728" does not appear in /home/jenkins/minikube-integration/15232-3650/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-170728 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-170728 --output=json --layout=cluster: exit status 7 (344.838522ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-170728","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-170728","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 17:07:38.424007  135125 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-170728" does not appear in /home/jenkins/minikube-integration/15232-3650/kubeconfig
	E1031 17:07:38.432475  135125 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/insufficient-storage-170728/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-170728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-170728
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-170728: (5.908752882s)
--- PASS: TestInsufficientStorage (15.41s)
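The `--output=json --layout=cluster` payloads above use HTTP-style status codes (507 InsufficientStorage, 405 Stopped, 500 Error). A minimal sketch of parsing one, not part of the test suite; the string is abridged from the output above and field names are taken from it verbatim:

```python
import json

# Abridged copy of the cluster-layout status printed above.
raw = ('{"Name":"insufficient-storage-170728","StatusCode":507,'
      '"StatusName":"InsufficientStorage",'
      '"Nodes":[{"Name":"insufficient-storage-170728","StatusCode":507,'
      '"Components":{"apiserver":{"StatusCode":405},'
      '"kubelet":{"StatusCode":405}}}]}')

status = json.loads(raw)
# 507 is the code minikube uses for InsufficientStorage, so a wrapper can
# branch on the code rather than string-matching stderr.
assert status["StatusCode"] == 507
stopped = sorted(name
                 for node in status["Nodes"]
                 for name, comp in node["Components"].items()
                 if comp["StatusCode"] == 405)
print(stopped)  # components reported as Stopped
```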

                                                
                                    
TestRunningBinaryUpgrade (93.52s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.1905982165.exe start -p running-upgrade-170859 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.1905982165.exe start -p running-upgrade-170859 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (50.513517915s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-170859 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-170859 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.451025019s)
helpers_test.go:175: Cleaning up "running-upgrade-170859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-170859

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-170859: (3.099703893s)
--- PASS: TestRunningBinaryUpgrade (93.52s)

                                                
                                    
TestMissingContainerUpgrade (168.29s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.1357690478.exe start -p missing-upgrade-170744 --memory=2200 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.1357690478.exe start -p missing-upgrade-170744 --memory=2200 --driver=docker  --container-runtime=containerd: (1m12.678892747s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-170744

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-170744: (10.436796591s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-170744
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-170744 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1031 17:09:14.061816   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-170744 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m18.952511105s)
helpers_test.go:175: Cleaning up "missing-upgrade-170744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-170744

                                                
                                                
=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-170744: (5.68656249s)
--- PASS: TestMissingContainerUpgrade (168.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170744 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-170744 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (103.221065ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-170744] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.83s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170744 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-170744 --driver=docker  --container-runtime=containerd: (40.389195153s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-170744 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.83s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (118.09s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.3474551053.exe start -p stopped-upgrade-170744 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.3474551053.exe start -p stopped-upgrade-170744 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (51.107356465s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.3474551053.exe -p stopped-upgrade-170744 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.3474551053.exe -p stopped-upgrade-170744 stop: (2.064455526s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-170744 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-170744 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.920951544s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (118.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.88s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170744 --no-kubernetes --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-170744 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.618849058s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-170744 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-170744 status -o json: exit status 2 (512.795731ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-170744","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-170744
E1031 17:08:41.813176   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-170744: (3.744150584s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.88s)
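The plain `status -o json` line above also explains the exit status 2: the container host is Running while kubelet and apiserver are Stopped. A small sketch, not part of the suite, with the JSON string copied verbatim from the output:

```python
import json

# Verbatim copy of the status JSON printed above.
raw = ('{"Name":"NoKubernetes-170744","Host":"Running","Kubelet":"Stopped",'
      '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

st = json.loads(raw)
# Any non-Running component makes "minikube status" exit non-zero; with
# --no-kubernetes only the host is expected to be up.
degraded = [k for k in ("Host", "Kubelet", "APIServer") if st[k] != "Running"]
print(degraded)
```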

                                                
                                    
TestNoKubernetes/serial/Start (7.41s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170744 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-170744 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.409037974s)
--- PASS: TestNoKubernetes/serial/Start (7.41s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-170744 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-170744 "sudo systemctl is-active --quiet service kubelet": exit status 1 (339.41715ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (3.33s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.52266382s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.33s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-170744
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-170744: (1.29606339s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.28s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170744 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-170744 --driver=docker  --container-runtime=containerd: (6.275204662s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.28s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-170744 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-170744 "sudo systemctl is-active --quiet service kubelet": exit status 1 (385.105754ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                    
TestPause/serial/Start (59.48s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-170907 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-170907 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (59.479942924s)
--- PASS: TestPause/serial/Start (59.48s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-170744
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-170744: (1.118935654s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (16.18s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-170907 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-170907 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.166096075s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (16.18s)

                                                
                                    
TestNetworkPlugins/group/false (0.62s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-171017 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-171017 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (322.601494ms)

                                                
                                                
-- stdout --
	* [false-171017] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1031 17:10:17.222818  171220 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:10:17.222994  171220 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:10:17.223009  171220 out.go:309] Setting ErrFile to fd 2...
	I1031 17:10:17.223017  171220 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:10:17.223173  171220 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
	I1031 17:10:17.223854  171220 out.go:303] Setting JSON to false
	I1031 17:10:17.225656  171220 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3167,"bootTime":1667233050,"procs":957,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:10:17.225743  171220 start.go:126] virtualization: kvm guest
	I1031 17:10:17.228700  171220 out.go:177] * [false-171017] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:10:17.230352  171220 out.go:177]   - MINIKUBE_LOCATION=15232
	I1031 17:10:17.230264  171220 notify.go:220] Checking for updates...
	I1031 17:10:17.231885  171220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:10:17.233704  171220 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
	I1031 17:10:17.235330  171220 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
	I1031 17:10:17.236859  171220 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:10:17.238692  171220 config.go:180] Loaded profile config "missing-upgrade-170744": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.0
	I1031 17:10:17.238786  171220 config.go:180] Loaded profile config "pause-170907": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 17:10:17.238875  171220 config.go:180] Loaded profile config "running-upgrade-170859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1031 17:10:17.238920  171220 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 17:10:17.276945  171220 docker.go:137] docker version: linux-20.10.21
	I1031 17:10:17.277074  171220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1031 17:10:17.402554  171220 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:64 SystemTime:2022-10-31 17:10:17.305923385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1031 17:10:17.402698  171220 docker.go:254] overlay module found
	I1031 17:10:17.448534  171220 out.go:177] * Using the docker driver based on user configuration
	I1031 17:10:17.450590  171220 start.go:282] selected driver: docker
	I1031 17:10:17.450632  171220 start.go:808] validating driver "docker" against <nil>
	I1031 17:10:17.450666  171220 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:10:17.453864  171220 out.go:177] 
	W1031 17:10:17.455503  171220 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1031 17:10:17.456922  171220 out.go:177] 

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "false-171017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-171017
--- PASS: TestNetworkPlugins/group/false (0.62s)

                                                
                                    
TestPause/serial/Pause (0.88s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-170907 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.88s)

                                                
                                    
TestPause/serial/VerifyStatus (0.49s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-170907 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-170907 --output=json --layout=cluster: exit status 2 (489.594962ms)

                                                
                                                
-- stdout --
	{"Name":"pause-170907","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-170907","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.49s)

TestPause/serial/Unpause (0.91s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-170907 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

TestPause/serial/PauseAgain (1.1s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-170907 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-170907 --alsologtostderr -v=5: (1.102177219s)
--- PASS: TestPause/serial/PauseAgain (1.10s)

TestPause/serial/DeletePaused (5.4s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-170907 --alsologtostderr -v=5
=== CONT  TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-170907 --alsologtostderr -v=5: (5.397158978s)
--- PASS: TestPause/serial/DeletePaused (5.40s)

TestPause/serial/VerifyDeletedResources (0.5s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:173: (dbg) Run:  docker volume inspect pause-170907
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-170907: exit status 1 (38.180203ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such volume: pause-170907
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.50s)

TestStartStop/group/old-k8s-version/serial/FirstStart (127.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-171107 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-171107 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m7.284290423s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (127.28s)

TestStartStop/group/no-preload/serial/FirstStart (51.9s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-171119 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-171119 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (51.897871207s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.90s)

TestStartStop/group/no-preload/serial/DeployApp (8.33s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-171119 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [94039d9b-a2b7-4d88-b1a9-b1013728125a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [94039d9b-a2b7-4d88-b1a9-b1013728125a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.011646987s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-171119 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-171119 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-171119 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.00953693s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-171119 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/Stop (20.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-171119 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-171119 --alsologtostderr -v=3: (20.056112523s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171119 -n no-preload-171119
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171119 -n no-preload-171119: exit status 7 (104.993397ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-171119 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (315.46s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-171119 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-171119 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (5m15.049641591s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171119 -n no-preload-171119
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (315.46s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.37s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-171107 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [42754149-b21d-4c18-a697-ad99482873fb] Pending
helpers_test.go:342: "busybox" [42754149-b21d-4c18-a697-ad99482873fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [42754149-b21d-4c18-a697-ad99482873fb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.012459486s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-171107 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.64s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-171107 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-171107 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.64s)

TestStartStop/group/old-k8s-version/serial/Stop (20.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-171107 --alsologtostderr -v=3
E1031 17:13:41.813533   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-171107 --alsologtostderr -v=3: (20.077269048s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171107 -n old-k8s-version-171107
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171107 -n old-k8s-version-171107: exit status 7 (105.256078ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-171107 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (434.63s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-171107 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-171107 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m14.169318884s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171107 -n old-k8s-version-171107
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (434.63s)

TestStartStop/group/embed-certs/serial/FirstStart (55.27s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-171419 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-171419 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (55.27098222s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.27s)

TestStartStop/group/embed-certs/serial/DeployApp (9.35s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-171419 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [bb1ec222-90e3-44c1-aece-0c5e7277a52c] Pending
helpers_test.go:342: "busybox" [bb1ec222-90e3-44c1-aece-0c5e7277a52c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [bb1ec222-90e3-44c1-aece-0c5e7277a52c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.012558061s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-171419 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.67s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-171419 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-171419 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.67s)

TestStartStop/group/embed-certs/serial/Stop (20.1s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-171419 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-171419 --alsologtostderr -v=3: (20.098796816s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.10s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-171419 -n embed-certs-171419
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-171419 -n embed-certs-171419: exit status 7 (101.337248ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-171419 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (309.69s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-171419 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E1031 17:15:53.549965   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 17:17:17.108507   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-171419 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (5m9.258993283s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-171419 -n embed-certs-171419
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (309.69s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-l7gxt" [b9289c91-294f-42fe-857b-c13130211ed7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-l7gxt" [b9289c91-294f-42fe-857b-c13130211ed7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.012173609s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-l7gxt" [b9289c91-294f-42fe-857b-c13130211ed7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006387746s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-171119 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.4s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-171119 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/no-preload/serial/Pause (3.18s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-171119 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171119 -n no-preload-171119
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171119 -n no-preload-171119: exit status 2 (389.331982ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-171119 -n no-preload-171119
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-171119 -n no-preload-171119: exit status 2 (395.569685ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-171119 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171119 -n no-preload-171119
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-171119 -n no-preload-171119
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.18s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-171820 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E1031 17:18:41.813977   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-171820 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (45.090341754s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.09s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.36s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-171820 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [2963ed41-72c7-4573-a58f-87ae0739bf25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [2963ed41-72c7-4573-a58f-87ae0739bf25] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.01265388s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-171820 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.78s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-171820 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-171820 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (20.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-171820 --alsologtostderr -v=3
E1031 17:19:14.062078   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-171820 --alsologtostderr -v=3: (20.10768282s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (20.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171820 -n default-k8s-diff-port-171820
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171820 -n default-k8s-diff-port-171820: exit status 7 (99.478648ms)
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-171820 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (559.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-171820 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-171820 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (9m19.551733488s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171820 -n default-k8s-diff-port-171820
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (559.96s)

TestStartStop/group/newest-cni/serial/FirstStart (49.07s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-172012 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E1031 17:20:53.549703   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-172012 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (49.071547153s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.07s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-dktsx" [4b46baf5-5eec-4b6f-8d2c-562bff2f0a41] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-dktsx" [4b46baf5-5eec-4b6f-8d2c-562bff2f0a41] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.013265148s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-fqmcq" [433a146d-0df5-48f6-a369-c39fcd81a9ce] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015228407s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-172012 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/newest-cni/serial/Stop (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-172012 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-172012 --alsologtostderr -v=3: (1.367276846s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-172012 -n newest-cni-172012
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-172012 -n newest-cni-172012: exit status 7 (101.406852ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-172012 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (31.59s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-172012 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-172012 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (31.10740831s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-172012 -n newest-cni-172012
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.59s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-fqmcq" [433a146d-0df5-48f6-a369-c39fcd81a9ce] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00710962s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-171107 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-dktsx" [4b46baf5-5eec-4b6f-8d2c-562bff2f0a41] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008022649s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-171419 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-171107 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/old-k8s-version/serial/Pause (3.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-171107 --alsologtostderr -v=1

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171107 -n old-k8s-version-171107
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171107 -n old-k8s-version-171107: exit status 2 (392.412348ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-171107 -n old-k8s-version-171107

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-171107 -n old-k8s-version-171107: exit status 2 (400.011098ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-171107 --alsologtostderr -v=1

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171107 -n old-k8s-version-171107
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-171107 -n old-k8s-version-171107
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.33s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-171419 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/embed-certs/serial/Pause (3.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-171419 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-171419 -n embed-certs-171419

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-171419 -n embed-certs-171419: exit status 2 (396.406083ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-171419 -n embed-certs-171419
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-171419 -n embed-certs-171419: exit status 2 (393.68904ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-171419 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-171419 -n embed-certs-171419

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-171419 -n embed-certs-171419
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.42s)

TestNetworkPlugins/group/auto/Start (48.45s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-171016 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-171016 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (48.451436366s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.45s)

TestNetworkPlugins/group/kindnet/Start (47s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-171017 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-171017 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (46.996276363s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-172012 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/newest-cni/serial/Pause (3.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-172012 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-172012 -n newest-cni-172012
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-172012 -n newest-cni-172012: exit status 2 (446.889224ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-172012 -n newest-cni-172012
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-172012 -n newest-cni-172012: exit status 2 (428.901729ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-172012 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-172012 -n newest-cni-172012
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-172012 -n newest-cni-172012
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.37s)

TestNetworkPlugins/group/cilium/Start (101.26s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-171018 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
E1031 17:21:44.858026   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-171018 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m41.263720821s)
--- PASS: TestNetworkPlugins/group/cilium/Start (101.26s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-rn8nd" [bb824991-c339-471d-88c7-79c14bd1b300] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013661852s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-171016 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/auto/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-171016 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-qv2p2" [c48ae0d7-1f57-4e59-a8cb-c55a864e66d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-qv2p2" [c48ae0d7-1f57-4e59-a8cb-c55a864e66d1] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005694582s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.27s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-171017 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-171017 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-r98m2" [72bb2b3b-f2ea-45ca-9948-5a5af35bd455] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1031 17:22:12.017133   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
E1031 17:22:12.022433   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
E1031 17:22:12.032762   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
E1031 17:22:12.053113   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
E1031 17:22:12.093421   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
E1031 17:22:12.173698   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
E1031 17:22:12.334125   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
E1031 17:22:12.655234   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-r98m2" [72bb2b3b-f2ea-45ca-9948-5a5af35bd455] Running
E1031 17:22:13.296297   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.008244839s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-171016 exec deployment/netcat -- nslookup kubernetes.default
E1031 17:22:14.576576   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-171016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-171016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-171017 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)
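The DNS tests above amount to checking that `kubernetes.default` resolves from inside a netcat pod. A minimal Python sketch of that kind of resolution check, using `localhost` as a stand-in since `kubernetes.default` only resolves inside a cluster:

```python
import socket

# Inside the cluster the test resolves "kubernetes.default" via nslookup;
# locally that name does not exist, so "localhost" stands in for the same check.
addr = socket.gethostbyname("localhost")
print(addr)  # → 127.0.0.1
```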

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-171017 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-171017 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)
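The Localhost and HairPin checks above both boil down to `nc -w 5 -z HOST 8080`: does a TCP connect to the target port complete? A self-contained Python sketch of that probe (not the test's actual code; it spins up its own throwaway listener so it runs without a cluster):

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Roughly what `nc -w 5 -z HOST PORT` verifies: a TCP connect completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: listen on an ephemeral port, probe it, then close the listener and re-probe.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(port_open("127.0.0.1", port))  # listener up → True
srv.close()
print(port_open("127.0.0.1", port))  # listener gone → False
```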

TestNetworkPlugins/group/bridge/Start (300.91s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-171016 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
E1031 17:22:32.498794   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
E1031 17:22:52.979751   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
E1031 17:23:15.378791   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
E1031 17:23:15.384125   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
E1031 17:23:15.394471   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
E1031 17:23:15.414824   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
E1031 17:23:15.455183   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
E1031 17:23:15.535484   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
E1031 17:23:15.695923   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
E1031 17:23:16.016267   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
E1031 17:23:16.657165   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
E1031 17:23:17.937627   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
E1031 17:23:20.498420   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-171016 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (5m0.908271456s)
--- PASS: TestNetworkPlugins/group/bridge/Start (300.91s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-npvkt" [e7f112df-c78b-42e5-8bcf-9f8aa2875549] Running
E1031 17:23:25.618614   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.016326787s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-171018 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.36s)

TestNetworkPlugins/group/cilium/NetCatPod (10.88s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-171018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-djf5x" [a7fd930e-48ed-48bf-b8dc-03023807180b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-djf5x" [a7fd930e-48ed-48bf-b8dc-03023807180b] Running
E1031 17:23:33.939924   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/no-preload-171119/client.crt: no such file or directory
E1031 17:23:35.859412   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.006634207s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (10.88s)

TestNetworkPlugins/group/cilium/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-171018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.14s)

TestNetworkPlugins/group/cilium/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-171018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.13s)

TestNetworkPlugins/group/cilium/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-171018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (38.99s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-171016 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1031 17:23:56.340644   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/old-k8s-version-171107/client.crt: no such file or directory
E1031 17:23:56.598133   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 17:24:14.062184   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-171016 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (38.985397456s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.99s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-171016 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-171016 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-vrm5t" [a841f520-7953-48c7-bda5-3210e04d1d5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-vrm5t" [a841f520-7953-48c7-bda5-3210e04d1d5b] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006975258s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-171016 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-171016 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-2tw8x" [4bdc3056-8355-485a-8b4f-2976b8ccf3f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1031 17:27:25.018166   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/auto-171016/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-2tw8x" [4bdc3056-8355-485a-8b4f-2976b8ccf3f5] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.006281358s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-9lvhl" [29f411c4-4dd6-41b7-971a-e3bddc235b4e] Running

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011468999s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-9lvhl" [29f411c4-4dd6-41b7-971a-e3bddc235b4e] Running

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-9lvhl" [29f411c4-4dd6-41b7-971a-e3bddc235b4e] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1031 17:29:04.017484   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/cilium-171018/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008023646s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-171820 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-171820 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)
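The image check above lists images via `crictl images -o json` and reports anything outside the set minikube expects. A rough Python sketch of that filtering, against a hypothetical sample of the JSON (the field names and the one-entry allow-list are assumptions for illustration, not the test's actual logic):

```python
import json

# Hypothetical sample of `crictl images -o json` output; field names are assumed.
sample = """
{"images": [
  {"repoTags": ["registry.k8s.io/pause:3.8"]},
  {"repoTags": ["kindest/kindnetd:v20221004-44d545d1"]},
  {"repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]}
]}
"""

# Assumed allow-list standing in for minikube's real expected-image set.
expected = {"registry.k8s.io/pause:3.8"}

unexpected = [
    tag
    for image in json.loads(sample)["images"]
    for tag in image.get("repoTags", [])
    if tag not in expected
]
for tag in unexpected:
    print("Found non-minikube image:", tag)
```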

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-171820 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-171820 -n default-k8s-diff-port-171820
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-171820 -n default-k8s-diff-port-171820: exit status 2 (373.580084ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-171820 -n default-k8s-diff-port-171820
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-171820 -n default-k8s-diff-port-171820: exit status 2 (392.012794ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-171820 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-171820 -n default-k8s-diff-port-171820
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-171820 -n default-k8s-diff-port-171820
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.15s)
E1031 17:29:14.061956   10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory


Test skip (23/277)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:451: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:456: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-171820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-171820
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

TestNetworkPlugins/group/kubenet (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-171016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-171016
--- SKIP: TestNetworkPlugins/group/kubenet (0.25s)

TestNetworkPlugins/group/flannel (0.23s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-171016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-171016
--- SKIP: TestNetworkPlugins/group/flannel (0.23s)

TestNetworkPlugins/group/custom-flannel (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-171017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-171017
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.26s)
